[Binary artifact — not a text document. This is a POSIX tar ("ustar") archive, owner `core`, containing the directory tree `var/home/core/zuul-output/logs/` and the member file `var/home/core/zuul-output/logs/kubelet.log.gz`, a gzip-compressed kubelet log. The compressed payload is binary data and cannot be recovered as text; decompress the `.gz` member (e.g. after extracting with `tar -x`) to read the original `kubelet.log`.]
wګ'גC AbYGg&=6!g& KHYl|ыom__2hRY"wc:;1n)sT/۝ wE+T2TIk;Gc(2zEE 4w>BYx5!=c8++d 0ګ۱QAgb5& C)c5W?/cD'Q]iir-fr@YlIո0=J 8M3xx_ǣxXw#>'S}hjXvI /{ ^tCzX=j%)5z"0[5g1}(>Vι1wJ%:N uQ|!dk2.fӱHijc{ fCu1 gZڪJ+z;X1݅)CNj?':%&٬jؗ9]=ʒa.V`82ȱ87~UL z?b쮧\2QQ}m+hpT憉L&߱ p^~gQm9w|X¸dlx7 Y^]}9zeD+Gj1ώr9.rX/FdTPF.zTb&}`Qο_|KϘԭKֱVVc 8Y9z`Ʌ*-|skWQjYx!,W7_IKW-PZ?oFϚ iuLJPWc2bsAwIn5_h su= Gq](æ18{f01|:[L] -O?.0,[Sxl|aOa/-^uyu {oy >4or7{Xa7i>6ômZ<-4 _` 糕Aߌk-~zm؍~+4[5J^2k[J-y4P21C7h@rR!Db:Q"u;QzQjɳ/OO:"TAmSl ǟ%?Ep,h# ¸^kbB >Tcfj2>=܏s;rv'-Rv\9HM [佼O3y֎c&&Nb F꘬g:Ep;Jp QXkE9Tt"%vag݋zYpÓ"4]d{8r+JDl Eblb‚*H$hSBl9UT-.x#@3)hŀP"clA;Wa/B,cccPBhڤ4Q 2|P`B@s ͝"0D u-+**DE#P䑐څ\2[_@?E8#~TZm,~V⹥ a6%OaBIT+L+SIZ'Ke*u8HsEvC) 9tNF3UTL9yo_,LaF=吼i.ycxʼ::ZUh9Gk\$T =^eJr}{/IArR 29X[iR^89E艧sDnk:赮rZ]l8/a~P>h~xQJ\" DGgO"rm7^ GHJ*r!wu!3DVL*F4%J%A *V@ F5ic+HUg$5LYB55A/Yf@kK uK#I@9p,/b9g ~N0c>F$?K@#QP8T5E (J3F[o Y)'$ oTNBh&`YMjIG g$UMbVT5xOa69%)ZȤ3Zh8jҡwo6h*$VQDp`Nљ0ĭ82cFhM8!ˬv$ő΍sdrw-c4PM( -֤cZ d97%Ǵ*.=Ni ȹ/+ID"=&3:JcJC$'hGAxc'AG1y#:()xYqkpAe2\esgu̺Ɛyu\f!]"!NF`6@ `(\,@ S\Y\s@N"œ ;\JQ-Mђ(J[祦sdɚ5QaO ƈ5;й&!5!R'cZdD`%Io:~0 Y"%PWcUVO< " N$c A5""P\fhA^džq(s|z?WnWY]pG^ &3%DߞXJi 0ϣJJ'a$Tn(tA NFjr&548H3E9 HCKdawIytn, _rT#|şu {^5 .f;* E17M"R |פ bGyCq4ˎ_D|gPɊAĻbx'`@^zU&wi 8 2hoPw@tU[^*h-ڜuIpmNy ܻG9ܙ77W__ Yb4?Ns q?\Q\}⫯ڰ/fA 9V?J#P3܅E"Ad&GTF_~O?]ݴ&WI*~N&p#>MY,D ,ԉ ̋qc`S6T2 ^$ Z<K] L6H1HoǬ E f6:+39ʀ}M!̐9¿?;]C3t5@Ky'Vύh8La_ DsA|:3;:l *WT &s٢Zz[nl.<#*ͬTA.Ji7W|'y`&)O~+V&U0'IQwH/L2{ {;H>~IxxK࿎|^z)|׼8܋Q%NV$/O"t&Ч_S jpH݋5{Ft/ֈkDqn!(82b.||GIH;Nzt.1jA{?ۇ}«(ջg`tcݿvnjaRծě^+9+Q6&(VG&@,F_GNWN :{g}q7SxZ{g/ Fƣъ񪧫sބ7[|c{lR8;ޚkVӯ^4mU{CF?/l[.so'?;(.~;$FW[^b˫^U=$ 3yK,cJQ4XT;E*&U˄Rjhj |Ќ]v6+0y2^>bG}nV5*1K[AEXcCc}js!BtPjC[dJ3GiޜFt1ޗSˆ-EO0b[8%cL+@ 7*K侈-O]\E,%?oP\iN{$aMވ.c"">uq 1m ={+y\qu?l qu?,|ZC\v1ǼOk:ŇXiyN.SY%bHJXx.ɿP@{{<ū9řX$HwRa*}{y 8#1㑋1%{##ɋ鈥@1 izGjՙdwٽZ dW;j2j2 T▱1ӏx31QS*Q/,7) X7xy2ܿt{!Ư{}vrD`Rg(ڔ-Rm|ɼ#F++:l+Ŏ2S4`!ZnuNcʠ&A{deҽ0 #DQu`Q`~F!tf{#]پ-}>lĒŕFӸ`)anIs LP$s/w5;y䗫*drv-[Ŭ/.By 
4V'4xgPPJ~?J7*䩘I?lI?'ϗ8wb6~JsZĆa{V沫LeMH&XS4J5AR鄡6GYwן%ۻ4Jw0L`G0[|gR>$]}S诂9 wHH&ȥDGe}'ewǸ&e97䠖1_R*y98ЋS"4ޗ{׉<yvނ:1k\1]lɱq`ů,&wQ+gz1ݕ_?9=i6*q x&Y7y(b?*qbIنejl{N Aʮ\_M>p%6kjhz Q:X+f݌f`>&G+1Xw)/'Qi*Tdʔmm&7^;rSJ5' :% 6\/k;s.$miW J1Q,GbۻRN.%_^p-ntk }SWZEuK϶Ef,Ѣ%vpۚm A% q4ۓkxnO^h7;+}I.8<]&˟u@js偩^u7um17鈪'y5^MX133938-ֱppSe$#pro}QNoc(%ΊqpzUD &h"VP'z ˍȩL`C; "8i;" ܨIV.Oi wT|0#<4W5iiYƚiYk%'GIeio< nMf3!c:Zѐa5!&eQv8AMH-nrFtEg: 5d#b\¦%(=4ޓO=4ޓO7!*N0g&j:MRb=DOAFx߽MO R D`X uk/V7;҈`i@(-Abe$7TڴB#rZKDhz.iz.iR `s`\V(cq8J>9M`EeԈ `ix;J3ve,{6":mܘx=&]a-&$Hz[v$kvĎJ``P`ZmB ޗe ܱ嵓wYYo½XXx+vZXuZ{Kw@%ŷjecM} )#Hgig1#VQ*,%ЌfK}>~mEhú75 bs ފ`X̀W-}L )懦;q]&zh08luXQX re-gJ N b>#^ ! Q?^:pCh&4R+$žI#3!LÁ ΜQ .iy#m^g[hD3娐QnV+7Hj-`L#:aq{yap\لU, kՈ:4.<&?i{gV|0rfsS1z ?gߚ/ ĺ *nL?`۟ϓOfa%?=փAu8)?8W/K| !H:16aESD:Q J1u3 ;L /:aY/h`pU!Pk& -,,,FW"q&G_WE iYÌ?p\\ovRVoLO/.oM 1 ?&[Dj6qO>ʌ*bCrcNIz,桺ߊ'/Nd@L2k ?zNZ(oyS`qaxZ!],Cԙ6Q.bUkh8=_&'C%h5jˠ.RLIl)=s`GV$(6q'w圬4=W ׿ӳW/߾xyDN8흿z;48P e(!P=@@ 4UlZ9 ۜwsLɛ/?q#ttq9ed5kwK@'' bV˕CXx +ojw4Y!Lfrd}0,&R J5qvmxN߸4ا1W`fp'pH`ḥNdd1$=tyi}|BDx.UMO<ԡ Z<K] L6H1Hoki B"N1<áN+:0lȾK[w7O4 ;y59)}*{s9nM:v[`Wzͭ5kKTZE%k z]Ӭ& FW`,KQ*e eR+gI194_iHio{1mT%lvt"#e1B |nLaP!,^AHt{oIcJ#689Hys^s%X)*۲: Y+zW{wܡvz<IxOxZ'vZ#YG`6 Ӷe-oM:X/;?(pi2lz9UP`B<# )Mk5 yeiɬxѲe:F BJ4 u~DlcOmhש]^6IjG|gu27LE~lO>f4K5fR o>r T[JHDm2+ٶ08 q`A?-)RKҲ U )Ii4{Qzϱ2 ;s 1^.[a9-lxͫYu-D }c7Q;r3=G n|.4?ݮL?q]GfY\k9iΦ?kungc[ߋGa<{#EͭO+*7P5eLb,9 Zs'%lttQkȹ鬋Y˛.*ihmņ0r2RPʥXk1(DYG唗Z)IAZ!LNX4.jg06*!D9Px[p ='7*R'K?5G\/&ZJ^( gM;bz{B1ӎXM BV /0u!w[sQs8kGB!H$') 42%48zT &M!JcEdJ r$g4qjJ̤9*ctJ#ְ|Vl8;|7*>0f!pN{y{ $ Q1Q8#g4t,P_ 7#T3L8U9}7Gt`$D :?'C"cÆ Oa5G,AdxĹGJԴ\.L?8я<6,W?4e2JSE[zn6MJ2Oea72皘y>i8?VWwySUdFB& pRԱ2+, ^.*h{ YC | Ch)(;ZTEmw۩ov@t+vYgWh%}oK (*RtJ^WR3J3JJe`skH7hn u.wl ƿTue;^3"$c/!!)xʹB-]Csҝ ?) jrˇr6!PROۿx.1B^h&4|K&(As#Eᐫr䵦VDCwlq{AZذ]熵|{'{'٣25_j4ه 14_]o1*}or͓ZU"4QKOm Pt⊯81^;;,k 5qZ>t D)Bz(7|5[Gbeb(G^^+孫ޥ~VU3>h܏]PbjZ c_{X&ɤ$³ #,'{Wނx8nzdzѱ"9 8ؘI9D!oS iL"*PZĿǔ=+BsٳKdfBjN(NAITU쿢eϒ78| ':ԜD C8b'{ r0~!":0Ią*Q^;Mt7DGJHuyz3 X/h NwƄ 1a=36p99I(KS#FkuNrD}2b? 
%R3RMPx|^_F.(9go4U&<6zSt?+7cER#-CQ$T]ӖqhDJp c1v[/%^<@o cf V[%JFI31@B֤9f F&rDpwTFF i$CGi8G2s73k9G;oide9rX39ڥcQWv TW _Rrc LԆbU"WĻ6\s;_K-1: RqrH0)rQ)2!u ΫCY5P ukm^]V:udhHkJпf ;ucCqSb%, vYGN;n:u ~Ҍ'm4QBR<8aAI&kNȒ9Z-~z %3~6NM2x_櫵649r@i'rnKw`}N}Q %1u?KɺG͹gϣ˺ BR~3 |Dʝp'd%"G!˹Lxm)vt蛲VL>}BU+̵9_/˹x4>}jŝ'nsF/xzzC\oy{>n;:w0e߻ nslcm _y7D+$dҮڦb%Há9?sd㛎oΒo\|| iotx?oSrݵTɬ-6w hE^&eAǗ/8s gv1sE8OXS0S<:xB˂w^%N,Z6Büd!i[<4z@)wN܊ ;m㥰Ik*N&xR}yLgֈ #jBCdKm7l̾4<\BaV{籥3hvcަ(2DO.x؇ UϦPiAXA@'}-;ݘ7 ]Vu CQx[pR6 Ñ]LXá >{?u+3<$]N" fe jf;ڳ~߾ {g"FܛQTTRe\t!Uh qv?r/vj>WEΥG瓙aevRCzMNb1Jibb 悷y$r0*YìN*Ge1֍@<7RvK8V9zMG4 L09N־Cjs=q6vq/`gr@SNfbzRy9۪*ZXx#x+zP!GAĭA,!\X^p關ߥTTfN&E~dڝu fh9h>T^r'yP-bm&r# {K+ awtRqXRY `*Sz0SKd-0!<.$d-R牉㱎^M0@3bTiʌG%Q,jRU{bٸP.S{WVdH>DF=Xr2r 6iNqIGE%B$IYf%8`9+r~c0e7SQk,tXs)h@Iņ]$ݍ}9Tɱ^ Oy$}3 uѵ2`Yc뙢WbJx<3CtYa R$F@!:'8(!$4a(`< 0,CʙsgՎ1eT 0ΎO!H))"xl!)Xil!pvN䈣<% H21y8>A p M4;B, bGl'DEG5\1kU6`yD :#2yroMs1P|S>o}ٍNY+z4:~MĊrw\;JJēZ% ؆Yp13ș1+淴)k\c`LHXjģe1j$*W4"t>P֏.#ˈ)j$& Xlz)1mp%AAg5 -LguFa=<"pcT o_uB?e]61D (Jb! ֘Hnjt,4wX[xtt+ǩnǼEAowAifS) -M^.Ǜ> \BKN9~#|"o@F4X|yv4Dn84$;Q#d: H y;tlȸp*~FCaʟP^"wm$'x@F`g_xbA &L IW仿ᐦ!)i(R,qZ5=A0,ec 0J!q-}7:&n'ܻ1t#KӨTͷ[wе}.!7|"?cL$dKPC&Ǒ-h'o{鈞\OV{RC#89 E2V#j$rduSWlGvWd2m\k_/lk>"ݯ=yV:vrr)Y#ZVj4:&ՠ7pNV XQLUAޛ|e?y{v@2%d`/L`qy|B%sT(\Rk.e bY`>+nF)u]`Ys:Y{9E-q~9&K$MO4$4f*yd@8\NWAc-YLvhux}>+tSqRdU!P!Wܣyh/O݉c Z_oS;ypD٭XҤ}0) J[/jWȇ+CӇdou0>mvEwwZ98G&zBfsS7um߯Ჶ{dށ=0u\X=( `LԼ|]1U6R6a;4=Hv! 
AoA$~y/I%]m6I"Sj򩺤=A5c{E:Y4l јx;XقMܢLQ#0R7ַ7tk*)BDIo}=Z/=$:ݬxvI<7P5Oގ6N:}<|5ʺ IH@0VY98 DȽ/o %:)iU80fq~yV7㞎7 ؤLL;Sc.5Ĵ3 MG/:MPn^A Za/U;ݡ\1;*}g>X% lm .MqJKn\=h{"o0=㬏7o% ?.*"ѹt5 {!Es 哷˜8^ LgAWYol}PJg-ח2mwL<8,Z%O$ST1;&\j[w.]1Mڛo80+N.GIdۍsWg _"ro{w%߻6ewʫyCǓU4h_+Z3/jK[55ZR;ޤ:I&=Ϋߒ]`URP)rU@.ÎV|ͪz+kz=Mѵl{K@JgRi)NjƞKKBRPƳ}KR1sxNK+IV\TdeZ*, eb^hA3qtꈂUGÏEA.''c $xf9rj2$7 .`;>[s^GqrgwiN|ߟOQ#kl{d֞[xD'sߜМ SG BJì%+dtd,[ ?Lk{>+ {<>hy ` 4<*g ^2@UTIT^Pnl8kq(%.xƼV@c%W$١.#i /L؂k Hl8܉~P/QQJxJ9PY 1i vJvv~8k !aN-r#3VdL9WgAR6)9sFȠJVqͬ1 h,IRB=n xƉ;Uvl-Y /Z41b.;Dr g#j\~L0|pQQK#ɭiSR>o8MT(@E"$I$Di^k%E!%~Z;'x*8SpH,]ˀED\Z&E+ FML8~R'=ac:)h.U, N Diwʀ$Qyg֪BKZ^pt"8pg<.F;]t1zh5uxSA2n9Śm)$6N$RFC)q\*GYrGhbU:Ӌ :cc.^4BaΔ,0$cq£f!_6$J2 69P2`a Z^_QQ )Ƥ*E "qo$stF^P+-|~dJi"PoY5HkNXm+Py':%DQZepÑ,{_RDpG@٘b3.8`'m% f)NfsNpd*G8DFj ynr6lEnu:pp|/ }wN?j!R_oj)tvv:tqQ%JY/|4;;)h検VHz)ćҩ|zŌ*&ep홚уH[u?-/|s9|f0bE'm8Σڮm ƓbpZ12u&S4}>:#&OjD+XVhxt\ٗ9MgglˇlYSVfY\bL}w2֗3L򠈍S~EJuSņ≿/n6/.H/o-}s~~{ͷιpooHV`Pl#GӣpܚoԮiҩgz8_!ifWh@XwFX[0Fy؅`)q&i[Vk}"lɫ,Uʨ8`h-r~u7%%omپF(C_pf[OpJz{~ -֯{{ECߡJ#0sܥoT"FL=#Ab?o)w$uGP_'IihOA`Ѧ܉ X8nEd$;0yGxf<2 k4Am$1HoC"Zld9l0fpV"&g=NqW:&qƻ5 KDE*0.8\lv$b;ƱсvLԣAF6KU Rq,>f@I<CWTbʂ":F wN \O:p&&VQRƌ!_b>LLj9+j$t>=oˡrt7+=L—WKӹn@A=Acfz~X7 ()ï8GqVT+_iTSV-ǕNӑvu(VP'z >7^#b*@ I8i;"YsL(QmGs:2,"1rbrX+_"YH2>Co'O)ҷ"XW⾣lKS=4%n}Q fsB=VHq bOqcqd8~6=K_"WW!0E.Gq2zu8*:.uxQWO]zܵ=ͪJJlZձD%E]}ot]t\|t\uT oΏ F}!~DO׿~ћfW~#L_pR*Ӡ~w$9'NʓAZ-]MT=i4'7K5zͲj3vuu0 0Qmಡ+.1o>GowR݌1Ǫd<`}7j- ױ56{S:xE ٔwQ֎A)[_߾yu=XCgmI WZrV9Afzj!]=!i4{?ysk}uf-|mW'mR3OgN#C Jڷ =>> qt7>^`_ |ݰf܋eod ?|vpvSWZ[o-p#>}13v# ŏRs,~فFFaII/IT%$×d _K2|I?Xd#KK2|I/%$×d _K2| &EL.d?>.g ]2۪(`~ ¤hi 4:aapB&z]zӯWEIn.\am8gr>`+rYZx; Mgi"[{O3mj'6ե7[%cʨ=L)٧};FR i"XNn+}@qvIJJnwD `E [՗ɷm]рmC ,%  +[[:^HEƵAZ(Č!'\"LgRHl1,y[Kp y :'KI S"!&"$F bSO.:pnT'j7:2lܟrn_5 mV`9 B)=FAZX5Y,Xp"]왔[I8Y_/Bj0?oK@!b&\ lD'7-*\&\_|'2KDоFYR?,w@Kg(p鄎9k-7P#0 3c1' D*ag!+~$X`T9FE" M-a$EV  Όa߾5'3O}>tDcŘ"۝6S:'ӑ?P`D"bUP'988DpqFKPZn1xXKJӘGр 0ZleF͝QFGR4,Y/g U\֠v~dN0g¯(z*7^:s]VQ'!7+ %p7E_jazN^Q-xbk%ݪ`?E.! 
.d,y8Y ]"W"{P{|0-aMWL:eRs]O <Df9t\UJO;&-w'^4CrUYxpi&{wLddtcq{E9Yxza杷ݗ`剄DPIr~e'+&{j={볬y<7)+nD+yq7`GN^`9IA_OE_gՆg<%rb(.;L @.w?O_?5aIeO(v[bLLG ~R5_ L(5bԈIڬaz52[}{^xV P㠵n.Mw%|W^W;BJz]Rj],~nos͠!u5r!IKeJ9+2H<7ܗ5뒯C~YM@-M7RG<׳X^,<2x 4f='ʗ!D!e!x0k5f,`Xk1hFs+%"[?F~> nGF$')ȨL '<'2z 6mUVw;0$5s!,ȍ$S{+%R.">N3N]5G;cqɩz-/f Q,[:mw]wׯ`k_~-nov'L&úU0@-j#jλvx$/{dNx2zy2 r?} gN軴&K{_~/L,q߾yyB23 UKiS[jfE3%Pd4HtK{~hhd#1*ZEll$qT$VD.I.x25<-Ci\呻8 %"&Uc#ȇW<0o#DΣ sn y{a9$L}gHMLc1>"'^q<{"g3:}wzk" (+Qw²&I)ˇh1K2hBZճUbd؉3XO19MQE&IYƨjpNJQ]#.-}WpI7{F㿊l` fp;`2-D<$s~nIdmNTŮ.?X$0An(w \X LמaStvޠ1r^fF-Uh} vv&t=&uؗݩTÝBWUϸ$F+Kf!Ҫ7 k`35c (9'6U@ۗ? yRs3@ҁ&_mT,)adʙ趜B^*&賊Ii<.CʻDJVLnS1 Mb 0Y'9c&%]lF{E[N!W01h4(Ukݼ: 7*i-Y ,ȌCq)TO!yas,ţ>A0b0%nF$)|-w6~0ܹG(5dH&-U8Ms5`%y7Ana}EXTĢr4UV*.n̉' Up0s0GYtce O#Jk9'tMVc.12_$ ޹2&&Ggx-5!䄋]oӴtFX[>bǒ&y؛R =S׻iZgo:>Vsy82\w^_g Q~8D/:mXPKުE.ݡFMTicڑ^S;Tė'iZ}Nm&2trsU `AvpQw.ŧ^.j:ա(eׁ+1_ZV2NK\AI#%Ȋ++Ķ"QM$#NK;;{t<Z:Yz,ٽh, gW'> Vy%Oh'5crThbó>o̺ςgjvM3-y"^&%E0WcB_"C.oe6æUi:5.ڬ)f!,5Y7U϶7ya<Wϼɚߨ5ul4cOSvXǟ74uum3tKJ/3滱+Qͱ}lHR3cb @ QJQr9R?;U6+f ?6h.ԣ1gQsk]@渄IrU]OtGtBפTDŽ7A[cx QcC^$er>YsBqne2s[܊P͸lL: ⬘uZ{ o5('.dG&IK׼ߖP RHA:^]Ľqˡ @i' @YxQ՛9^YX K**9,0I 2uNWbߪ[p Ack#2>Lf"iRSRh.SI0 NH}lza 2.;XKb郈2pfKw>R{?^mx5ZcHq{|-McuCK~pKbr_e>Q/{X9SFzSo* S^{?tϧ C&-pK@qL1LSjpI:o) 4p7i@*9JntW&;Kwf r7~8O J:4b1’i^[k!R_j)ttt8=;?\T 9qBNQ+rGeBţ'NI}jeOk4kk#[~kop1=]x ĤXs_<蟜6s+.9zi---0b$nHl0b0j0ˋ#1mLQ-h,->'zr5f?QCuճZ F.Fi#e`Eoq(]_LRo{El5Gj=׭NtiM?lnB_H?8Ͽ?᧏\wHf`Zg?XE[VX_0mkho>NԳ _?fG^1mjYEʭHO_.0?'MhS6f> ɽ8Ml~TOzaEMnP)U,"ĊOe#f#A ǺU>Nq﵏ɧ̟HA~$' *h@$/ wK btˠ3uHLIm˼=a- Kպ;8tz궍:LbZ:wӽ4k0vWu\?Vݽٝ_'|׷3Ӌp#Ͻ&SE$4h55 |<GL5B'p2Go^Bɴjql+VLU>YV%#roSd)֣BE7ȶ1>qieGY:3,'oG-ps#Vͺ'+/cr֛u79(ݓuGL"m "๳㗳Zv׵InJs;2fOIhR`*O&rIOZ@I*'CKe\A`|,7+^j^X/ \;*a`)H"*e:J8x:g h1tcW pW+(I>t4Aq ܯߗsi[Fxh 9]L&δ&8~u SrI{8[Up3vXg-H/PICa m8~A  Uz+;i/R΃jޟ/TJӝσ>M4լ[*C" |"Ȣxl&;*ȳ\l/ ؽs=rDMv<A2g{W6ߏ$&2Xఁ"0H$[qVSjvAփ_5)fiT L£و̵ \E} H 0чd-:@yg9!:42Z$KEA^E~:Š6YsV<U Q L$%Y# qY"dIYntr8H)J0眓䘘 
`SB1sG[9`-2PVZu,hJ^Ts>b@4;l$Iu6vP[}N*U׆01&HZij˫GEƙB8lP$]4rd%B%;oX{էծ}Z}x3;Nȝ~8x3m0,:(j&A*>Yr,P4JTTA+.:TaT={ur]j=AV8;~\%5z+`*L=T`R.Oqq*(3kШMCM 9ETeuՑHob:+kӊ+خ3]`ŒeeYZ{7;޽_N 0uQC}VJVɰI24@4,O0JH̥j'qbL,"FY`qU |XJ9*dKR :4'݁GZ͘8mM13G$aO=LVӇ4/>y1]FU`4\0lIҮ{::~+%'l?8;ʟ!iw[|i6hrzCHsL=][t|;+}Q}}َcFo!]/ϠKM1lmyC}^ k/á[k̒,p q_ t{(>ZʖyLvX<{+[׫@Fͬ j3y*v|\wydzW{Vg$Wo+,v*}/R>r36%*e'$fi{_9.tg,3y8|bJPƭFPf㣡n/ن;U4^Vtˏ;B z݅)^px֑W|< i9(0nq>s-(UhQ{`tWxugJ&h햘7"Qs3\(i$Á>w7(_[}_+ ^_hd1 Ku^p#&P\Bs ߓ˜x ^ L⵫73ꛭ̘__JȬ mee fMB2IǫLKѧ~jxAS5}Nڹ4*Rm.Z DYzKgEZ8yvj̐Z>(:REҵ:zP[=vLy,mtiV6̤s80MovvO0Qy9NFT z7ͣDLbGc[Tb t|#n]_\^.>\f_=~uzT)5Sn},4аyLM:7lu@k]v2J.zRg}ѲzcOxj>v)6; 1!<*g cmd,y@E :Ժj `q*1&Jg ơxƼV[ Ѵ+?Q d՚I{~&W|B!)ǝ;/}CbLz+N}&g4K/QQmr1$0l~8k 0'9[ kpw8;AJ٤d s>JVqͬ1 h,IRB=E=AZ!##wY&^83ibY;\v(Y2Ϊ!Z\qH0u`QK=g&iSR>o8MiT,q^B?2mIM:.Ӽ{gaM~l?.o vv-A\i DG}KƔ:1g\2qtɎNxcܔ^`|7ESQ9nh#Yg8:1ʳ eztxb?ۭW>>xl=@ I <Al߬&Qox1j&'[,c+fRܯIg:_5X=^=2|$&^Vb|7sugV*YjUꜲr1#ebM׋Q,c"8Gw1M_nXsSC<^_;w]}'| wߟ| i? L0R8ZE1O&'7SZS|SLMͧ-k>yٛy[g*~a?,=)N7ZsZQ^&O޼VK_P)U<"ĉ_Ax!GVT&#<_']9wtۚj1|N,:I^ؠ)}I ly2fIhDfd4LV:u$=n\`\"xT$Y6 Br. DհhM.Y SwSPW΋U%j_OWՏ_?AzFI$BrsRL7.4`VN\tgڦVWO*f HĈ2)xE0DHk(I1ZƤœFrGQx䜧TuzrRpZRz VDV(J*!kt4h)HP L8"9NB>,"%g cͰ3p6 lR=E_yD}Vp&ft=›w]uK mRk.WEkdX n_[7ww9tf} ;W[6r^pnZ=_6^,=/;lrΛF5'OexVYq@M7=.۹5}9?Ovc(yy/d+#ݡ$ 0(3UIZpݮT%В$J#$`UBCF0;î* . 
RvU\*Lճ]gWzí{>u ս&|ivu?p5,_] J-曾=zZpU׮$vU%vvU\zv +R^n%!\PR ^/Z?td#q 3giīy̛35lB:@ azo?xK%|x<ߢ`G)?/ yzndlvU{<+cy'<=n#Ϧ)4bH9LLL/sO;4|'m@sr2c2?ݘYb :ƌ9jo6j~;wɇ)2 g3W :DWT2Zd` \S$Qv*r!E.vV(BT)%Ehj!eJ2\%+p֘mW+{eTe ;î\zWU eo k!Ky:hZ{"ǯ?eUx~cMPQo5|WDhchӫRB&6hFW24$ u II-P#-Q1̮/u&So`u<>],”>7Ӕ0q9ōIsϓ9+1-wNk7W?sϏ%Lw 7 Ig_ 'gGז΂DmrhBQ NS CeL r$Bl(( PQT*&h"]$%!"3@ԏ[=EjW/(mV1О$8 H1##-0D˝u4Y=rBo7ݺ]ysdKN)ءTԮ8*)UtJYEK_D%}>QIOT'*JD%}>QIOT'*]J$a4to35JzkK07"4*5SiT9[m?ͺ\amVmYk_f},(]}N<Ȍ&!^8PX :K6K|!RG#x9LmL∐cҞ4Je N' lޣu6bNW7+ݦzOI4/2+syzY5Ӷ,Z;wT껥ǂ!|lN" j-lCv%n_R.IeA6ȁ{:)+A#@20lZ%kbҥv.μ(^v3X[#dA8t|: [l2Y Tt8F+vm)% ’v Ĩ#f1}V,qKəYi*z>~-?c.˘I@T.8%5 Y d >hT{U3U+vRo']L 3($o!e0htvW,1K&YlȢհTg0GN|%W&b:#PAZL*dI#meb1mL%$b,裗pc/0l}}Gm!Pdۈo+NiWlyQa̾@gcXȞ@$ANГGPqd@aD1s^L`xu7i&wr2;.]r*5dPDXNQ<~; @1T;fq o] {>34e9bv꒓.Vbix~4w+]m" _&Okf$Jt|,|N2,&-/n~<}Š/rգJqIu\EN<:01m|l *؇=Wj=׍NjƉߧ G׃8<=bt ?7?޼i߿K BaoZ?> +~=] ͇V7.g=[jA]NyŸ7ǚqg"% ~(81?2?mO~ӥԜ(e Ϳ`3,Y-TF_uk!d#T*eoR>(Q5lt "trUWU6t^/L"X*URM\fv\8>wDQrJ1ɦod V:d}2>&5C1 !ID[jܵ([d 1I+;tjʱY& Q \A`khd{Cq b*c=3p̡+kHi}5|H8ij ^n*B*&+eOVe r׽k f,KD$ -d,5k$"`r_RQ=*;|%b=$:zNd'tFC(&OFk+tR ;54] ,:s֨Vy ˹ense-(/7f8\>A hp`2!˵7\6/r YVq;R*?sմ^yyVW[-|xӿ{5 0NtKMqKzUT-r $|꽐}Y,Rge.K_{>+.F`:hqDcmރQ^F=4nLԮH!+UB:RTD*1cfPɐZN;4JlhTCT0+6;zWB*%{0F0/器J] |qUV.,}}jMѦmivEoYei4(C`ԇ"ٮԇj+}}o><+ӝy^!>m>G !Bٌn z7c: =ȟ@#C|b_O`YOHA$H @XJh| lQN-E0XE$HR8V:`(III͖/" @STx* "ijv=2㊲M Qk-WKh|fD~ͣ5h"lR "'"@t0Y zc0`()F=vB3ck]VfiOWXnK3|RkOh鷽;1z:& ሗx' fzߝya>o>''ȁ+m$GE@afBcfeA#GY(YrKrո7JٲeI|q2)f/ce}f4bnPx'ΞYw3qqY3ף `QβF6£MָA^>p:#ҭ-'t}\ cQMrOl@wQ=&c۔ %p̡>K=W*qAz6s#%xƢ0Ո;G/껩ʘ?FHY^Y-,S;5){X0Q p'm)„Yf鑭ǮNjb%@6la{8q}S7},(7z$oz7> o ^\R+Oa2 oEξ|邞}h3]f4Ct=޽M`m. 믅u;Mow _6i:KMRr."4'\&I!,68kş:G.. 
](6D `%H6!kQva/~XCyoStAmmVٯر^grƍ|)q_40[mU _d*f>w"FIojst <@=%Ʀy1b18WeNٻa99I )3%&} }4E ٙ5rtbɶqqi7]_]sKar;B4M% x0v>~'S{σo zn緽Żx1"6->g/cScgD>:~B8gř`Fďޥ:K%:K\n|;VpRJn3Q dR@c8>%cJŕ|V*]+[eSc &me":>vTؤeQ:x$e$ߛ;_nicB( /R E9(5:&Th.(AYV)KNcLbx\x#@N&"3\[r|e8$*LBCANȏ|K ܻ+?OglG]jY;}>k/t.1^X&d^2XMQL!Lp <-h roc]l\gt΍!I.yϘ8f Q#@ڔZGF?t2m{gMFڛgsLH=##E(V Z5Uj 4\(lI\q>q:8- H_R2KQk~n '˗Pa(g:1FуC>^n9;XP}_} I{7\V-0+M/ɎGV[ -jE"txNBe܀=0̵ȺG\]X=(;3 |`Lsa2kjbJ "Zm~aYq]]ЍfVt{-wJZnm tp%1ޞVYoqbﮚIC93Yxu`:ŽCΜ;5~reVj`Q`:@n5֨,6Tc1Y7%$X^\S$íL. ,WA`P50eGZiDY#C_[hU,6N;Ah1ғDVrBRw2- c SLcy[ۮo~EEbg3qi(toz2{F yBk=Ko|޵#ҘIv, %%OWdٱV;c5E٬_xꉩ][io1qLHZD]o늜=|E׳?9I͎D;<{$P24D/a6C4u׽z/=Yd Lr! )f 'L !6m"D&mCCeA="a@!!Xe%:P* 'ǒW:#gϠ!)"[h4399NiLʁlv-)gHIjؤv(}P/E&ܱV1G#r}`:imX &{*qS;_TWƨg'q>hZ`by( %^xIF ' ,:1aB: cW}0B7mB[papx@a4ӝ._J -(Ζizde2z~u9]TM,PXQ 5d ss(xKf}VVڽ8F[ey!r$1ȃ`-wLdsȸ)184 R1[ $>`MsB8Vw/oqqjzj<-?DQjN~|O?avRUM9]=շɻdC ]g׆WyEI3|Wj /jG7R RIR_reV@wv>庤9E!pԗHo\X5(k@^ZW:RN*x2e7Ȃ0?j{8"HB ygA,_+@P(Q %nM)ʽ$JsYre& XE"Ykc!+6 rH:#g]])ԧv؅)m6Sk5v:J;R6~%8@#A/(J􋉒'r5/%JP QJ͆(0J:&en4T:'"$1fZJH*"d!a-\zOyMcDWOM95E /^5+C2rѢRH`H1&ˍj"68Ɯ,_ǹ J.o]HtEU9HȞ\H:~e|[#F{䚺,x/(Y'e1$?pD7=`"$ gvƹ5rMqgAgS:hs>QG#bL+26&]J sY`R0p¢1Qde%"K#0c>댜=O}b3σpK-%t3JIؼchXqiA$Q~RBP!| ե>0fND`plV6(<:Zq>NzX<.{88#Y[m^SY:+Ҙ',dhH4N ח}ٞ8:Ivա 6^^^ᴏ8M5iR'V$R*Rc^k2j 5l@+&#iny s&edY'IX-GOMxTY{t+0֫۝c=icxz=+5:ԤnR:_~HOCw8V˧Iq/GqsvkG(G cRWW&qo`9((\3=3*ql*p#Cς̤<&c,:iKҧ d}>tÁtkգ&{M^ 2,t#[_A*)LY$KX uQڷo}9i XeB.>$11kF)LHnjt٧ˊ{IAA/LLp<3x%:3c)SY9 >+e.}8_ySsWϊ)-t#=*Jhi?Wo$ћ|< vbmSëp=t!Yξf%bJpK@9b3L\sS @<^Ǔٜ ]xI ] GFj#s:(j5䦏n );'B>j: Si7tcu!Ҵh^w5RZpͥ4NxvjEC.|jV]M &=^QenBխ9-ZWmrH6GuomXʼ,%BArzcBƣp IxհhtJbNpɥN;_!W;_FIUq@/N 6jeC]:wQtvZJAK%B\ N_BSa9OT2)x^[$3ZZN02H-Gsw,{ٍ>;ߍ%!)5l3$]?ϿEn쳧;M/ҝT:::RPqzLODWWHҩ IQlDHi!>wɊ`Y$$aE#W9!u* *t zDr),w0)}h"r>%'LȰ3r<nesÁp2\mۘJ`o{_~ܝ辁{t!-/B+jvG -yƈCz 5ܗȭͦ6G/@h{ץwvvBdߘ l PZ7/i9'Nu5eM-nݿmY7ww{Gk-tK˧6oϞN~W7ɻۯ8yh;rښ/{]|KksڭO3{c~`_Hі' 5qyqќ[m19JNw`«*k222WwuJ(E08_V7֋͔^B[Z,FP1ye4$ IǒY&$Ι+!Y ԡm7o?q_@$uҭ<ޞXO @fy^zޗze[}CPHPZ. 
*@Wvb b J]X *P)@眒Run%a\&|aޅ-bJ1Z*jo6Xd|0;渧7ɐw6fd! 2+6\&2Y|4!jyIX1TMj:II-w*xq9\|<mP| qbd7``B`"~\q-Q+ؑJ?JmeXf}\47YdY JnVgի5܈U Sg TO/i$}祥Cٙg# }픓i4btTԂ;1MԪ6=XtALbZpqyOR~X7Gc>W$R`>n{̇ޗ ^c:mg&{~~q:ŶW KbUeYLԒZ/XUR8/[lY|w.4,ow/kڬj֗WWexvZkPZnpQm1z2R>XSV jQ?NC5 *,{m }2! GkE`8_E:_ۤu4v_ۤ8WkQBPGdqRbƎ\5)QO5+}*kY?ٶg7!?]\\J9%3\|<4s*wo.VApݠ*jW(C|ԩvr:ʑQJ:)&t)od6@\ XZE"1кejU~:gg*/n'yؘt"L BBPX`-U yrTO@Ӣ/ )ļ-qͷ|;t~N>ԥ'OtI-Sy]}=7znf=WT@h0:J9EW&$ Fh\P Y=9kBҾW59JT]Fv^&={vyLUj:ܸ/û˱ lY0~kg$}ԛKmw_$f _(6_ԑW@kb6+Wꄧ[1Si3xI< eP!v崝>'j$Oё<5":fy: $PD:m xkuʥb-%&k3U[JWޙT 9VIaB.R AUͥxN2<:k’;s@ffv;\^ҧ ;"&;o4IΒi^Tmm@KhݞR+JU&LɅ+GΠ)VVcʈ5 ']'Xh,E8(A`r,r16B$xAe쌕nk 2!eyQ𐢾ӻeRy^$FϏQy9>Esw 9BQ: \8ϐ]5b"{ړ?==}A<7蚃o]D^RYԈ/@GU` !:B9DxU:^Z&5,kMJfWGF6Ĵ^imZlLkzvy4l>ęKpvn{ld#&!V{xfPUe IivgNk35TZ8!ҊMK -`@ aV om0k1A{/LB]*MF%:xS5$ 9RPM`.Ӧ~[ois[;2UGnDi/Z; N煨OklSC =&J%tMZ7kR"M ^aC:g,3Hn'{ʝZw Tm@hW% 2;VKGGFeҭ;%˫hzއUЀbtm3&c0Ε⋵]bELy"D]Rv/|( $20"ct$jgTl舶nSk59"w_uPyDUCXOԜJ9bEf(ژbg&,?Ek2KPh5!rw9rO8;e#;Եj%_[S׉P[EΩG&CιnU@ J"8R d| 3κs ǘ`fKRF%D+oCZ& "Ѳ Sn !Y_rG;%N`ȉ (n P5M ({]߲7H79x8%hs1!JHVWBUN-#"sR^)ؗ;N`D:"]eh  9'@9S57;%΁u3o|j>w[g7|^Sd8Z#P|%#NYG+Q!J2L2 \']¾7$r ]/4F w 꺫ZC!WjB0Q%1bm "&ADKŁm t7BqPyYV.>zz;ۚ<3[N*8s5\kM#X(n\?Iۿk:ȏW@GGvז((dbP.FYL,T-ɏtc>C1>## Fe-(fz%!iE>>~=>kKTӯmU2'-'s޴Xk߶Sӏ[T{Ux۶&˼=O;yϳ<_j=:\ 2֫b'|s6U =? $׼7K iys޼#!yOob?)0OV"VXǤĭŢcKIY_\g7y;K,+iP9nk~.b.$ X;|: #xl??ηBoߞ^zv=|8pVζϺ~ȾYQq5$ݧ5mN }9KI^` 2=rxyygFo_ǟ{߽Ϸ~G:qF+0O4#AGߏ"#3SciS[]W {^TG.wՑ` !zqi8 >З t)S4a\E\ Bd/AX ꟷ+y\GJヅ*l?OnܭM4ZJ*U,7N9]R"}ALM"WA'AgrY*爩ֆ;´D8_!Er! B%dx@LyJT )}@Ilq6NQVH!U%j\^&~IKzo)s݆N\!IJR:NJ:uQJFo} :AW{O2aV[sbxdx!sQ)W-`4 %D͵1:*N9RO>e61pҒo[5(b!DLJq}eT=HG[gniϿuY:d4~:*;ۮcCG/_JrD_/ꓚV?Bī#MID ztC~t( tͭsAk6baZ~-!tA*۽ *Ir%َ2WɽIMCI҃{7! 
!Uq9ӻ5*YS`1BthLGtR,c"" ֢!-=50nG0EH\Fn຋ aQ\Hy%&|:K+':bZO n~y!軷W|7?'=q ^=[0?hn3woi˟ !\n\jKo`V~Y-ۦeAzYىcR )ã80YǃםT;gvzVK 5WLs+Xq=GK:rѲ4) fͣwu #G^#.rAX"J`eRYCMck7PFnW I2lr֕&mIg~,`ImDwG/1AHO1@LB8DžڧxDžeB|q86\'s~8C^iq/_AWNYIw4v29y]qC%bӕ[fm荊hE餲YL:R :!$]:G!fmYiYș\qAIpBX !5{IS{L>ܕpys36ϭjGG Z)1hΙR!,$@{a7`B:cdm[yW7iz˭-;jM NdcU:kv t4MNyL6s%'^3H 3 ADR:}ő~jnP;e\%R`Y) NcrȰBr[͝ɡH@Ж gbԻБ7 tqE^<)b.g{6s>ϧ"&q<*.QhI^L1)IA8A5Ny mu[9;K@[%!&CgO@wcx(1Uv?j{q7%7+զv};ŎueP^4˗,EsB~dRl*p B/TZߡr& +"Z~2rd ]\*UKveԵ~^b2]1nd{B|ѥ%N>aTX<䘮 q :rۢ̂ii8rɅxyA0-} ڇQ_>J)K-6A{:%qE v:⪐ TUVcW&Oq0nuܴ}rfƫ_;.LQ `go 6]Y}Gu@f߀yԫ U?77|8Z]l̍|+Ƨe_WA&%]So~l> e0=-hyx|a>swewzEdzA\D C YﯕQx (<]ZY'tVwܕhMc h~x6E\Zزۖe޾6{-%\ lZ)V kZ)V w;zV %Rp-kZ)V k&Rk~Rp#WSqj5kPֿC:ךiJ5G+JRp\+JRp\+J}V y[) lZ)V kZ)V kfyf)V,/GtťBF]WijnBtgѰN;k&Q9ʦPJ$r\|:KtZx妋/[sk<0i\Diן6S^3)zRyoA< r l gs,>x?,#ˈ"wɏɇ LN W%aӐQ:%-,NzЎW^j@a֑|vcU7If,gVӴن2 ]A-&$=B D ګa*+͢}ti9WݴX_.keEeFe<~K{CZ闋ɰO`W Ӟ/bFb!WvT|zkt^T#ʻ͂6;u1{F!MJ=QBftV;aɎW!JF&ǘgFY'ѓT& Q3)j2 ڒ9%v"8TʲP6Ylb Z^\8nveVyCFɧp6%6!:c"!\AiPK`5ƘQpґx9(e( El m^h%iABIZC,YOqC<]mw*Uenvێ/q'\!8ˈAdUH̃D.3!sNvxhìqŽ4dB I"51Ǡ 8YEHNvlW#g<\(8"E"VJD]Y"&D!, t@jkRR@aI' F$ \.pd(kX{m'mo899 *qf_ftQ;-';w@;P /Jq$Vgt.)>=:`Rw!&$$,BV^%S-tBEG"I$zYh|2* )Dj:VER1N'3A ejqCС%)PY(A0vi8KQ\Sxr%)w@[b6&ۼ=)N‡]=Cu>{qC+זQ&s1 ^tnxuhNF4V[{,¹(PtI37xu 0)Ay #y{w֯Dl!kR*.,IJ2FMJe3(坨XI`aX/-,ǔ@Jx$\"#Tnt9+0|GD=<~fdIF4)ǒ^ S>,IAxg| 4Q1ct3i`9`QJR=*0PMR"L(K$J:&!T!~EYWgN]8x8E -4gdHvBL;ob˜UʀtXcS$>Qvp M|cZ!uzǢb:$R4`DD3r4ٹ""@E|]xxLh! 'ILp^$1`&xiͥǎǦ)0^׾kJ bb j2j 5e61pkm_E?{|? 
nmE|:Bdɕ$b=,?Feڒck"pΛQ$#Z{1[,} *[~Ͼ8ܠ h`՗5dD;U:{.iW_&9jAL*JѢ*٘a+|#~ԝu?W񲊗Zvt?G3smvi1Gi-`aʌ`^´b1ϱkKnNJ(EBƒ EiC]c']Fāk!FjUq0:EkX tyJq4QȹQjE#aREbo"D4F҅ZhHziW3-ۤgw} =am#G3*3$r~.eZE Qi)3xeLR.4 ]z&0~ǿ/enlQF M%#AR,9mdtvw-Lz{oMK'zNGYeEn`"{&CDTO#I QDpC@J, (xGI4 GrM2E@n |Zd 0a='Zn0Swv5Ayo &XAf\Y^+#@-'J s$@3AhxDAPI(a` " FT8bHET@yhYβruČl'f]v=Hq½go8bD:0'3`a2P(%՜3J|"[VQow8peVIx&.P<!ʫȩ*'i(rQJsYc:1Ւ!OQ(\0J&EЊ Cb DgA0V V8(Z[.-gs,m i7BajxQ&<'Xs3-G=r.2 Jb-+f:gFh*Uec}M8*8b."sE1,R߬ͣV8hG ( gr}lt`W:0zIЋY2ΌAn(SkG"t dG&U~(Ү*c3H(:sP1xu&TJJ6e䮧D|rg5/* @ 8鈘&y-R) UASʔuH=΅u+¥&Ӥ ViǬQFYJy@. R5,w [M| Qk֣{uMԑ1b}!pai8uoK*7%Ѝ CX^G*#RTD2'TfiKTK)*kǔH:;7.xc|-Mub/u#OhQoJbvG^8!an=ɏaRr2CB#{1& F r_t@ %SDblp6xvFjuÀ\:xgnrhAyz$qP`/yb|3} sS+E^uZh~v~|"X`Ƥ&<|6jCv~4qV+8Ѧ |ɕkԶ4SSu7w7h/^v $Ysf q4<-Ws$'_.5k`Cg$uH=Y0?2a<`d'e7ѳ1gz!F{VQgikd$ ́}1, &q>ăIsS<Ϧḹ=put7oH?}x{N0Q'f`\#AP׿chkh*Z|v׏+=%m1& J~ؿ\:p[W~҂ӕՄx~໏ g vp6pm5UAF`8T3_EާBT'H+q/㮌{;?jnA}A&p#>!Vy!:^Dƍ.ASg:n:ax8OS& DB1>1TUbRD1qE,:{U'mv}ҍNUKwU~i7KT%H\KK WUc& NW`,V(U1xe23Ń;J9ZfګRf ؞+"xkq$F(ϭ) Q"De "(V`c 38O0vY4a#+^ wE%Nlf?)9VPH{ W~N L q97ut#٧>p?lG:C.Ckcv{{Gg#}3fpRF+̅^%( #UܟǩXl+)ٷfY# 2xI41+bry"hg6TSLq0r+z2(:>)k52b=6M c" 3FΆF7S^"Yh[r=s5$Ps{k=otkYS̮麵›w=9}8kqn]-QWVf@||GnNj]53&m;zrVWs u=/\dE(K-oqlt!Cmk:&5-(#C-=ekv5tMaǭ׳&d1(Yj/Gn%6Ql.4GrZ$MIħ]M>)ۖׯ9}X:5wb~6|T?:N@?i:a2OoFKmaTʥ䢦L Q68Pa5rIWу&(>\B.PmguܻKBۦףB:f]Wq]Zmw\ǃ_K_Eo{Qy}>3g=i8ֹA77 prr1]z>|6>Rp^PI *IA%)$v *I܂JR䬠TJRPI *IA%)$ITJRPITJRPI *IA%)$mTlTJRPI *IA%)$JRJ,$A%)$TJRPI *IA%)$TJRPI *IA%)$TJRꗞ~ R3]PI *IA%)$TJR;JRPI *IA%)$TJRPI *IA%)$.6gRjh+fQ.Usj*pq]4Cc`\b82A+ؔ,{q/cm8B鬏=#,B\(wp)BSϘ8 1b"iZ+ Qjo|#0wUH,R#7 E| |ƃ9KF3aki^}n98|Tc$ۿsWI7/lk{c]˭ FImҵ<=4?Cf_]?,6f՗ ,|qv h~WϏ!HN/$R!sqX=b\H*h.~\Y)K1{he Ezq5r*C; 'ab+/Wq<+D8PbS 35֢h睞4'/a$-$ #f/F0V{!Uŏڦ eHYRt$ tiA$oe62sc5q,̚ ?Y y7᷅o%[fiF],}.+_/6N&0UB(@^L1))ı n<:r3.9qpӢw 8yoS[2Ҡ;ƈ/PzRXm>j8 qʛzS>w厲1żNhs\G, J֏,[S{Oi7 Nj+؄l3)Dk<1A)n^$t\ :!9IeΠkzo/v5h끦kfPx@ OVxesمR4ͨmk&$,hh«py՟m+ŗ\B˓ .kp"N]<:^rɍ>0J{Hu&H9 4k~D'jJ`\fN7ܔ^c}jt!=|;We(k6]V@Ws4\Zv=:2J9}AMI` b,q&ES|y"U}wdXa,0|>28mx40Uc(@a2h 
/h(+V\Fy@d(<ˌ3 J*Oޕ TN2iD$ %N֥=~WPjq oi7SZ7{na|_~\7 e|NlOJ;>R;OfpNT>KT gtDn` fR$U^z֠sUTEӳOFbWAj!R03]#{}n_b37&03r&u՜10J;S:.YKhJ`QHQ̦&h@ Gւ=f]8C,[KQ-qv[lǥ+V8jV[VG[颁DBY, 2$ H&"ug&!'4]]{6>U(B&dȁ !["<d1x_Vg=lbbc-luo{mŃHF@Z*)Rg1TZHBE >Ȍ2BT BqR ";F&EC akGWgE\9/,եjXh+E.vdo<.J(f^ZݝKq鬾{ͷGWw>cuyB!/+R8OuHiey9OIm/\ _\qsU7WEJg{sf̕ѻ.6Z\&sucɎ1ʹ̕ձ+P/E_R<Y,4u)mR;`S74\C#&EipNQO>  gn߷J"c^&i5]7EJz3t)<ŷᨵˬ}샩G)k,q&jq1$8m]zJR. @3܌Kge&|bYs8nј/ÛkQ)|?]yc+WC9"$3 `*x%(|- ,@^-v)H)Dkߡ/+4(1WE\b护 zs͕F=Fڟ0с,ooZX`>Ms> r  tCγ4yg00m a"K"г_K3-S @_;aYSQļ&C߆~yzSzZR\Z0\#q]"O_G)avoӬI`\fN7ܔN> L߬nM}y #h3K[>uL_Ў%;Kr`ĸYb2 跀El1@VGGWl*yR{åYW}o*-v[i0.q*3WQ-y0xi gv<^[J"ܺ%Wzc֝VohrT0n8ڶx`k k@Z`cmE| {D& ĊFyu#(`&EA9z H[ %e}ʏ.X`^"<ˌ3 $W'J4I&s$dib#Mݖ>Jq!A:"Jƹ \g@G! DC 6C5qv\ZVj?]G|4afv\q[z ZJWj@i'w-Y{s&c9:Yae|~Q8o^7\Y&q.I=y$]B]V&Cs3qתٳˇH&̮`nI>ơ2j T%2'A̅8DV&n#@zFzUΰ/Ïi>.mSPƫڡ|;JgEYezoЛqO|bs֠:/L/[kW)Jdkxf╕qc׉gkcڰ^5Nq:ȥֆC- ,m Wtz1%fzYodV{KNo([VGf#yulniF@v挌X>",}u61p3~[zxbL-7Lb彣hLOyW,Y)2sjGnݻ8xԅJKκQX{'7P'SJ'F߁^Q1u(p{m쪖7Y_w_7f~xz7ta<>Ng߸>倘q` %"-: [W+&f`|zc3rm9]!+cTZU-& BFY['/ <%Q ]:A٬$/9qǮZ4f]:#$S1I\҉AЌMT](BC!!`$=eVc1J`/:iB5yT\ YG!bHTX6#g>EEUǮ6ֈvЈF\5Bd6dXC$ Zb8tI)=.tBRM5w 6oLŢ2J31,Y\JI+@PBbbFlFfxx#"b͸dWE"'6WsŃcW}C>|`To#7F_6AgAS$-t=CJ55e_%WҍRkлU*n|V{ltNqoJRLfnc%([]Z{٦dUΎƳ>_z˗ןε8*h{z2/>7^g:f%78\1{'= Unxn=ZBNwEA$v#sE!a>'ȓAp<V;8'XvJZiumѨ12Jd*3=ʬ<S*Ҙ45ŝIHj M}6#x%9'L;bHdžģX}hl6H M|CnBW!Es:#l"j$] IJc\DdrXB(E>%F:,*w1IddL[Z.S]zgZZ2v&7(zJ; T3Y~­B[5&444.% Oi.J_w#OݍS Bnvy1.-gd:Ʈ(Rh; :\C-Þơ 55GMrƾS]e/e!hؿ“A AO)zpY83]|tU_*K%+i/itίƹWF#MF3kV<˿I};jkɧdEbt ݾ;0;`Sϕ}PKlP+ft) ՞ Clj3.B‚>1/Z`)Js0]$h3ZK=K῟@.>w2ʢx1ʢ)@ʘтJKBCZ$9@m>39[$SHR""AS-h_o|ᓨ=ĩ2>LK M!Lgnn!4:::OpqZTom2i?At]:V5QҐҌgM@YNxt5^ Uw:L'J|2(dPݡsvW;&* UP!OW,O Wl|~R1ΝF_c2bd.>G>j9ҾS mN -Fey03Y"jj#M}+ٗtBVU}mfuм2O4.F Ŀ+ka^B.*(4˂v*le`P/K `-jg0)0DY@je$I )2e#iا/FF0ʃKڢm0&ښA22:&ucʘʮj*F5*N/B?fƛ^~SyǷ5=^Ra>WY M:üG$J~M<5Y)qq?9W<y?H״#)Z qtNGa:8|4<;n-%U%(Cl okFkߏxGkCu_jwbXM[bo2>=8[v)H4O~Up1N&!Vү #FƬfY~f49`j'ϋލyzv:[rvJuF]lk?𧅌ԁ _/S}ՔFeTƱ2;ч?ㇷ?A*og`F^EO&O"'}#vCk[ m˷ZaZqwi5i~oo'ϧyhS7w0խ ɣ&l~t:J7=`yKU:˽ q}݈߀* 
sU1N۸kl+v`G2}BP'J]=Md&uEm $taB#xkA!eH@2:SCԬQЖH8M&:ی']vow/}K\5/'G /_ul۬A֒aB9Nh;0.wK.:-9jo6zwQi!bV2f/P'7`$t* c6Y9' îtmg,1v${cJ$g'.#((tQqb3r6D[}u|Tv>Kob(c^tJl*Ӣ]a8W윉%IdOZBL E}WQ1HCoJzmkh0(ir0h6 Z 2L" x!w_S y ̧HXdw,в)K9n=xTW:pc+0kw(53wWai^CfyN=?oPHPIEC+.S*'?J?Xe \'N ,g%ւQiZE4Q]2 h\Иi"`F (8Dş2U4rŒh),F,S ߘSWX=-N߾;#ܟï= v{}CS3tW/mV*~!eū~T&qC9Z\4ZؘbR>+ϭςxzÜusWZQc9|Tֻ Uns^ܼGʁNS$z|\ u.wΖERcibQtn^4hjvpoQ>=):wW$) jK$R{B^ H{);Vҹyၕ' ,KB YUŌBIF9uNZ *0k3L1d>vӨFv2q:9-;_ה%eS;"qK?j|ڸnƿ^OoҚiؠ9J9j)& TP7F&ԧBs1o[yԏZ~GWhPES7Go`ԜQBgSckFjwkl*9m5amI3W/g# L* ~P)uWoE\IK#glq3uyԖC\=JNJ\IZmj>b.Y_ֿ~t 8هJj&^*l )o-9zRB[Hy )o%<9STj-RB[Hy )o!-Eg,7Kc )oRB[Hy )o!-R4˽5KǬR-RB[Hy )o!-R~ ݈C1)ٝvI2"xҸLNc&E]-ǨUq\"x/e,P :eeHp.jjAlut RL܃"UEjͶ{~RuQT}4A{j=>l:Jù9' e͈ἤ)Q^VZЌ?A 5uc^~ۏgZhEf($S%6 *rZ'L P ҭ=vHNf3X-A2{ӭ Eߝt׼>!lvXhvb 0~ӒqekwMEQW㱫$VfE5i|s|s0gM9ЬOGf? C'Ӛ-U)G6uֆCN 6, "+5d\.M—4oRxtOm(Oa巜*&oCnpFgc{Eqz{l<6rlLU0IN7LPjs1oc՘-u]qo7hy/xީ VBD t׾k5{IiK f@935k7cJ wO7_#a>rVOY)3oxTjsN4[*)^ 4~apZT{@쭮nǮh5VM'/N:MD=vnAG RҬ_!z^(;7"^#>+Qo?~i0M#u\G s>+Y)-qǷVFO_Džu"cFFU5Dg %2L@Qyb i{Si_hx&%V[j2R21?:ȑ h\WsL8gm {ֿ=$%j78OdgiFiO+YaD*$ށ'I2Qمb/D]<$ .V2 *1Ԇ*z5rhviiwӒ"Tj|z{=%m=9n`8^֋Δ0[n5%Tޞe1((bl&s@E,S-]2nF)GvƾPVƒ{JoJ5y韱,۞EGɷp61 >6X-]N ȮQJD5E|gvJG s"Kȩ) Zp=jK H9Lӹ"gĆ8("},U nplqHurRџcRVQI jYH!PtU@jb0x.(cn,˞llXb .0 P;KaZNq6DE.I:|J%_]/QxÿǛ7cԂM!98WZh*j>Yv\1  :UǫOk>'7/Zx$DBBfW,It1y6ȣ@=!G+zg?}DzFQ (*ZҔ*|ӘMʪ$#jT<3Tw]@0Q{q  S*p:Z S2FnYqͿvsc!QA_U(iEMn!1\>n\}׻\c%Z9ȀO1AQ0P7w2XSJc \K}J hp) [TPIĕ3 !h}QC))*l9Xc(;YT{012g2x8R(>MT[8&|ed)B!eQ厵ꐨ/)?LIk3~Diܚ}v>Ȍ2iQ7_unǡ)*$Dh1 'Lp^B(m0ԛ B7MS(. 
޵ݏvuh798@N?̇.mDf~anJ q8kZLBm~0Nqzd\..l2Jl}hYe VDEH*G.}YJC1Ɲ4 OwL .Ҝ|g Ϗď wU Gk *Lm~̾ _uYq}}7ZD&,]?A$Kq)J PHoUf V*ȑw++@K2%B ;M9.y0bډ."tY)$((b~D 3J`-Z '/vn8RK2KΗKi;cmgZ1sx/e,P :eeHp.jjAlut {,_ܸyX3a򱾻 {L+V܌bwV!LM oi{^>3Q/`~h6PwjE^,ct᯲Q#)BY UT2-dtUKUd ^t|C gcBS1ď+]M/#ك Ǘm7"rWUQa]BQi~Y^Ttv hgK9wƇ 0$ O@_@>{C#b ި^r-.(>=( 2l\b^C<)3j)"'B+iU2D32_d+CGHH3 e^r)W 2.)6,jw1M%{(!@ 5%75)7+RJW Kt7OS}Zf4~T:ݶ[UM;2)S5EꛚֈU!aJG"cBj8DwZ<\X< yIFO=a{-K¿(+ʐHt 8,y LUdxG&Yu@N)~d2??fx,?ftݟ»w]/"76+A˝m,>"m,^ZEewdv6,:nG=hޔC+-7de}Þ)=;q޳#W7H3a3W,DQ: s^5ڦXVDϑ'#amo)GL⬧j,Bq/{:;(+f9sFHEF[ IcL\u4"(ȉ[U<6㘄DV1mJI2~tV -鬥oDfwM>mBT/U8Z <* TIq$a":RJؓ@MRό05A*ot햌z!cQpm2r6Yh$CjUSlIe<ݮ.#h.Q, V)$ޔ&*@I3'dy~/**, 㘦t;va;Z'h][ުT#7KfSd%QpHbڠIg_,J;P5="y?_Jh/W E$hBb *C+rT(Hdy$c?5 )$ALILk%mEr >(gƺtOhO@w1>k C4k}S^iշǝ-oJ1̛8ғ ?N5^wvd=qC?|/8~0V\|ƙ: a3.8:cGL'Nԏ.3bGiŵ_d aUs &k虥 ln}*4 -aj`LqQ\rp-iG6o|WR }<MVh7o ^խ.HNHQބ2Wi1z0^\kuQc~ۋw-+`NpGO'ޮ9GY[jsGibc%]JJ߾i2{ݽ:'iXEdc:Ub:{Z '^H.WMI^UIۑNI~@8 FL3=ffbߛq(9qfJ +w]p *%N޳v=zJ3↲2:_?_~9+q8jk7fڌ~;-q~ 'c Raa .71 \6j(#׺ l]gHWRjZA V'e,k2h9z(DCQ@${.G$Ck*x(1m֌8 "k[.XjP$uo.q齉"*K&B&!Ahmu W$àSGNXdCGkKGئzEeD$]4&PF; XQFTꖩsZercS=;1tT-|JޏiףbtY7WKWŶ7@BkU88J$_g58nS5"xhA6Xo#z\ ЌS>t ɘ& 6&0DU3.\H1H/9]Vʠֆ,7+L9kE匋jܝqA¡_BO4ҟJ ц>AӦWϿ*f>ȮzyT>zTL#*YlD? JS*djߣJak~*Ynf, sx"or/_OV2qXQ e8KNOxZ[/d+,Ƹz#ѳPíe f,@(RFEv&GHV97:87*KIEsNYḋ6JKf\P8~;h;Ҕ{<'Ve7ݥhY1k&1M2(iemwb5)>OgZ< BE^L1idX7XgP{N w&m[9䘤BheJ lNȍD"X>9oճ^O&Uhu`u'R͉6EkYn+zy,K` {ʤ MѾF5F4Z(dE2zҫr5y2ɟ8^#L"A$5eƙIIxBP^;d)?si>:i*ɫ׹>Z;6뼥7L 3ݥ;#~<ɦyӹׯ? 
^y&GM*=Tg蕞WBW{B8^{AyaԌ&>xגٍ˗·pI}iyYEhcMދ=2>gN[q]/g46jramX m}v+oz(o>]K%5^MO7{I78y)'BȀ#02%=,l (RHFlf]m Y]s6]ؐxp}U^Ng4.bLsA;:}d #c!>C ZwN;:tO$~yu7zj1bƒ2S8 -bv{N]UiMX jh'~{@~Ylsw5#G'|.5*c#F!yXaL}`  O9M.<5bԈ)2Ynz=խFW6=s,n3OT{͹TʙFL)gWެ޺ J{)})WX>6=p#B۷#BO`dͧZ&%L tYoK)zťw\љe`A'i lWgˣY"W;"o~^,$F- S2Y,@B{J|tO>*&Ō-K]H:[Ik^Iy"#*^Go 2VnBMdERr_ffdJǷ9#pr~8΀x;Sқ.Q#@seʕ.32cc|Eɴ!),S(z*givQ4㎸b̓vz9 Ro|@ߣt96s!140ɾTƂr9qgiFq4D )2+چbaLz/(lS Pa[1f6rk4Mlen6bN= &k(YhaXُ)!sqr6%LtM W}XR614!UPFby|G=|G5z@zl +f R4])5:J Xyġ^mي6p4-:yR%B"ڃN I7k8lqܥ_.17N2M3^a8pTf>Ыڌ^1:GwzXWhf~0VK"+  :aK<Ά СסY_Y(EH- #Jq-0[TN"2 Je-2W*x/}d%Ef)'퍓p<&rB)ufA+bo.쾆CzZ//g*B_(bkY'M4 Ӌ뫈]dًC]#ΊeMɁ9Z9B$L`IrL7A+fF^t7BQ0K@2X!xKΒ uB!T!yO79zΎgB*)Ñ$Ђif,ssҝ>쐳-gHIjؤV(},UhB]r.WULGdf97D$ 4f+gG"JV(va4ӯXelH2&H-eibzgRr*x el$,RLPel@2v$xVwY~nu ÄCd夤32W|JM(LQzt!vaUUQVEY[QeWA=Z7ii)oP~/L2\%J2ZLJ^(Fi8Ba|c OԍㄘFJ W:5P!dM}a@0<*{»dE,zY\ B*:Z *ԊW)T~z]׭rжY݅ tnqhr4Mgi.}۶~#Ɨ$f~w滱jw[z^ooY١;7 -n<_N5- 'Ond++{ Ia|孻&ua$n6㼞|xjusd4"BJ͌ K725ޫY6cP0^|Va574^sk]q 9Qrɫyl " -*!R Y[c8Ƥ}IA[]@cN#&{^ɁľkZ6H^hp riXpt36 V?7T< !HLJ蓲$?pD7]`"$IAΪ& Ip HR)i49dPQG\15 kp)%ΡgK#ЎqZ-a"p¢1I"K#G} gYoQZcHb&X_lZty(gk9Jv3QG@!=J8,L * ZS HU߿P;G++ƣ`ua]) zW2G0V&CTt4nH"JJE-UZP"Tyof& rf!.J|Z)d6L,H+L:k?5ׇVsUm7mU&{L"ˁBs` bՎ,H\VJZ ?o饏wl*ﻒzLw~3@r1}$])q\y7?N$L~"SA;Ny5 p}6Nٞ{JKgZx q%;:1J#?y=H5kɒZU9}dd<22 V{;CvOedVϹz iSǟҷJ,[jwnY95k`Pdz(tulvhW5ىS2xFg`><Z_տW~l/.> I̹r6㳓ݮGFO׳Z~;_Ξ@=yW7b6O[,Vb2|>fNw*^k׳Z Fzұ󷋫X"8G|t1?Jɿ4Ѩ5뼱(]rO?|?~OӇߓR:kZ?{z,{t?=kW߲kia/|{>߇cm3{W& )?~~z6,rk5X@&+4lya(.hG{r jT<*Ċeb#xS~ejˍֲ>Qiq0^y]2a}I2Cr(Ę*jԑi 2|N:dHlX6Y*J4cԍG4" ɹLM J2 ˈF-pr8N/{:J{mM|#i|i"M jav/F4?h ]wQdI$Ѕq&MР*4Z9kUJ9ڹ#S>p Se2!,:W a4IZrKQ8䜧^qҋjx8q}4 ߏ߲ _o=Qq^R!#UyJV"ղ[g4IP\VtTQEGUtTQEGUt@G%*O?rTS*O?rTs8Y8 ؽ/w_4eھ2ZN#n5J#҈[kTeRJQRJn*rV*UY*TeRUJUVJUV*UY*TeU*UY*TeRUJUVsvI6qLs)+UY*TeRUSjJUVTeRUJUV*UY*TeRUJUV*UY*TeTJUVjkp /&vB 29NɗY:4 n@xml Zr%Zr8xg$xHCVNȔf-D֑GHŠ^q"\Q:rXV*75!X;rb!C.a2qboَIC7w4Pxl(:mJY׮zqY+BTZs~BAL{#[CKvpQ3/Xr嚓YB3U7H\TpMJ"'}S`5?ּm*tt=i b9[X6#q;uiLR?5,FbhmN2xDZn$9sY& qsȜ:qSr4Vr*?ƩG%:ݜd4a!rFwM/}ksf6+co7 
[unrecoverable binary data: gzip-compressed contents of `var/home/core/zuul-output/logs/kubelet.log.gz`; not representable as text]
{wo lC ^080j}0c\O2WoӾVu5{n54@AwEUG+ Ey;m醒P"~ Տѭ<;=RFF*i-Z %%2@XtV-Ni-Fu%V]$WRzD% L8Qe%9 |0WMqN6V*y;}y-ɾ_Hu>E\I&AҊ2aAf^RT}'PLI!Yo!mg0isuh 12)=[={"'n&΁^)Ȩ))6'jppPDeJTd$H%1|Ftlh M /*f>gl%PK& $)=M2|Y,=`{贤LӨccDr>WOəV`CrK4(x0c'csJu/`d~kzBV w/i!lswsx=G|&U]:mZ$-S ˄E3Z.zk[ lnCPj{[?޹y5YLޟ/&W~qMs}{Dk'cc⸀&m%ؾs" FF1f'B9TѢ`ᆳov4k'gFgk\~ Mޟtqu} 3?L&79u~۫_OVzkݯu֭n/|`=^/T~8]O rzQ[*iᐥtTD?b|MUUXD@t]\*%?{Ƒ 8\FGK@:b78``;FF"LQH9q߯j EJ#q'EtՏvaG=(HokV߯꭬Gb]whtUѵ:֞,-}$oY$/էhH`^X+m傶:W6Bt޼z}'gZ^|5/'aUipǿeͱWX-L}1V]L7fkneaktRi}vBmqr6 sSg4 v"`s|0 $M d)mjCfŹ'4[ldK,a±S=V6(iYqS)#ӱeb_hBj9"Ct#!5$|JxECkVG8QjHUAp* [OHݥ I{Z 2I|},}EWڄ"c6W\KO檩Ovr0o[MnOYOtYft4 fS!"nzj)x[ԼRr<\]\lo7}FJyK=?R|rQ72zEh.(5]5w]?𴭄&WhVn|OksNGaهb" Z\#J>GZJ$Sj)fR3c|Ld6~b0-6λ&::Z>fû~6P7gֺ` ^9b "0sYVweNcҞm2mp aDǹ xm$΀wC;7g:i2늃3|;M3_>\M;lHu jA椔>)J"{]PjI&BudzmMsj 8ԬT딴x]soȠDFY pŌ,G68M:J˪e^!v gn NlcR(jel,"ϒy'[ֳlg^%5KMw%<gV ،pc >Kq0;3RiR EUM=˿C(!X]IFCaD2$QIMV2ql4tֽ՚~ԯ^1piԺ5 k^UYvQZ0W3pof5K`Cd)|D8K6ta#xͰܪ_㤳yw^ \HjpKk\mBL+\c"v)8pfR=?ʹ}7NQu% rP[*: j]IYKE`@? vYޜ|yvOn#Zđ 8e.fe|-J ״R-(auA& Qu݋^ĿOյQgxc}FZeBMwI8s>Z e< ZGZVZ zop`FT)4][\'p"!$#gƺPj]?Gɣ'Z͓N.}~2k,Y -:= ?&kP/&',8!cjd? 8H~ʃ>l`ؑuʲ&(p==wr-Оτ<[\І q%;:5LW#?0r4"呫-ř,{@ GӚL1J%0~l|ǧIxJ9>4~kv?/KߐHS5,trr<=<^}\s:&GqR]GK/kWk'pWo(apO҃YqZ×?g|lƛU0Hb >ףJwlW6GO&Zwؚ-zu͈ͨͬ ;Tڔ ?Hxe'7mχM[lsNku]_#Nȱ=/tV(>T$.1?⃋UPFeVp{qO`#E?_AgCX 뛟vhڶ4hZZآi?ޮukkڽ>ܴ,fq96 $O׿?N? 
'<ăZnfM\"AZ|aϜoAWѠTĹDSC!V|ɒGKFxj_wN~|q}hk$k$a|k֗upMy 5eILsUQIנGHٰz,^%pch6k;gr.&oTʆf^wRtFgFW+筭B X䢏F+UthlKx \X|^[pW h'-G2<%j'Xb=>?Ƃ:d|t>noFk5J׭5}4RVp{h?֑V -$VK Ihe~J-(ЊDWӺbJW ]ZmNWrvՁѕUjU]!`"FBWu W=] ]9_Y]!` "%"Nw%pWCWnˡwGL+k v _ P%q;ЕCϵ:?Ӻ"VBWttP !ҕߨq)(Wt00Q 0u @`_~~b:+ippQw'B0~Ĕ|Efz:qL8ߗG y'q2;nb&ۜ]/nS~guuBfᴧá+ "ZCWCWu:4_rte5+kt2HpY "Z IW邛;^ ]!\"ݿK(ꫡ+Ћ#K.T .=rv_Qw%v+SLDW؊b *fJ+B+MPp|DE;i&Q Zߨ-5tIә 潧I$wvYX ӋlA Zv_so #`Tbal)050t7UV#CWWRZuBе{'=] ]h;V ]!\X)tEhy+B)yOW@Ww~|WV-Wח R̷߆h P{Zql^p}\(跓̽s®/jVR$YK1;R jcEn>-G>M' #KoM|yj2BaY4Z:AMזEJg5-ZTY7lw~h`u}{s[K 5e$7Eq1Rgڼ0UXWPIJC(^ ܠ`nOx,k4/>as*Gz^/tdws|$W)>g8b¦7sr'e/6z)uh%x@K2RU:r/?P3|_sH?wU>Npr?O ?vٻ8+W&x]7U>"{ˆA} [n ZC`U*_fwer~Rnیa;cgyܔL)H5` l`g9Dh^Z*<\Ҽ76\HZnz|c4鱴/gX;&1`Ϳ/+ hz4(Xsǖ{J(D+T 4zØ aB*ZlD(cB]6#ѱJ`K40:iZ5*ZLm̞ZwTk4gWcXku7d| #a c&3$JˣKQK!՛2NE5Cc"cn|mn^RfFj)& hb&&Y;0ʐmO(hck c0O(HyT㊳N,`ΐ?!ŜkXhK3xB)b14S#yL)U"NπY7 Iuyjܼ(mMN4Zl&5WTyKr-!Z[OفORjʈ?7W!XU&3I1Sd04qT,ӧR<"WHL3k-ġ;T{3ҫ d) _hRb6>]L%XJ@`س;(Ptv@[RZ 4Cs8҄~+i2PTP!e*2Hcm J{kϱ5rwR$l SK #<#ֺ'rc>k-ӈGn W< YkBBSڶf#aV#gV:Tk[Ot^FFǨ|*}AӖ`' 97~\*`P2rl(`\_e3,o d>AQ~[JoBeF|3c9\A3Yk}~[Ew(!.b@j;7CAwjG2Pư)0}߆ 1[@ʄ5Hyf$ ͙f+|m ZK dⅥ]o7r|_`v=v~ ~[&)[hDŽ1>"*&Q`m&Uɒ=SMt>s U2d`ayrD ilOMBWE{0z-4`2:]ن)!lb%VXA8;KJ>Ŧ>&ʤQ>>[W' 8g52S(]"TƩ8ZЮcD;v@n!zjC Zjw ,h6ՌXF?Q6J/d̰B"rQ:׈7rݧu6)I@B4#Jkl\ u"7bpntg?MeXNS cE9aQg4h!0cvqZX+WH7YLMSLk$,άvXP)NUH~d9.x}Aώ泷%Ct(MXg_y~M\t>_`rަ>ř A`!a'[-C ]4'B{j@Ūca`ZKsrdns^,Z,  Ę @9( qYaSC^Ç^vts]^`7;t:&DkelwЩz`m!%, ]:fGP3Bz p}v=1A8V.O`E_qh"R0i >y8HWa瀖EޜcY 0pRsb<:(GF3ElRAn l5(?+| g2- & 8oBEc1?}-ɮl}~kPy2;|Z[! 
wU| dAu"w]-jY`}!s{BS+z|dMì38 NR>0+It &ޣ0@.u)5[zM>ŪN\Y bs ݱPVm&8 e`M[n$tipMja/(HrwţB`cl0H5zA.Շ(;O3B.U d7e%[׿G?OrǾ0"*XfZrdzw _ˋ/r6oqNR;\-4v+rxup7r// "oᠷ8; Ǜ7g|c]ziǞ1D7- |k͵i=<~`KG7ol3翌Oc :)?S&f&/lәgdbU(~@K~7I yb;$PISLe"kH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 tI Orte?I e$P'O2 l$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mid@rӎ@3ng5Gk@CV@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 IMAz$紟$hn@SOP$)&9MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&N' _/2ѯջ˛g@|.Sp -~KK#MK@ɬS.G߰+nJF?qMP,O_;+vCWw6 ʠJ ;2[͠LY)0|My{t%(u,ӕ̥/ML_2]= KWOCסLGf+]}@ѕy7t%ps ]m{JP>jJW'DWG7}Cxr辔_f|tϯϑό??^}kl^`q&,-&,I)}/v׫oޛ+9s /](C7(X.A߾p~^~9o߼8ګ$(9UZn޾O͸£gog}x6/M7ovB~cMSe/--gW`yϋow..=nb"^7.W?ݴUss fk 9<*-YЈ6= yi{i7p{c!#z}4p;+톮7{+A{7ӕLO(z# {+AKtt%(UwDW8n pq{+AF*]pi7t%pn̠MGOW2뵫SW̠͞9'ps ]=%;]X1v%pm ] Z׮%S+N툮p+vCWݱӕ<赫\zziI~uz2]= n׮]= ;2uEO+RҥG>A:tf8 0C4GcvC׆д%s4-(xSC=FaW}nΖ*lx?x(4;7ɏNjI/(‡dM\ai. /5T^'ig5&hPwÎ|XY<,Pps2{sDdާ# n܍(|(I* OPO;+nJR壿/+]]ÎJBW@ٻFW#I0-" Ha;Qc)3|_DJl%v AKթSU%)ҕؒ CWWCWV۾lفNUUBfQ3 ҕ!`QNUBF%'C)ҕMAtU9 bFkrf<#`+G^3 :v(UJ+9s<0eB4]"]im[3ExOiLJb!qLlFҢ bX[D\e XMXYqR-E)4$\@JAxaM9bd8v>'u8p`SM Hc.?D{o;A'(DWؔCWR DIS+I%5 b J^ ]!ZNWtut$3їF r+HIKU]!ZNWrPW'IW1IK2X4rU ]!ZINWRN,.50Վh;]!JfJ lJzdjWZsv(mtznSf W6µپۤކƽ[ !.>`{FnܘcLԁSp:pQl}VEV5VEwZ_$-ɻ#7J #r;5bQ6z1|:bsŭ,8H+R wBjj#R+rEco<#mيt݂@WzpV}M`FU1tp,}+D)@WHWZ4#*.`L1Y[3uxqTz~@Kt5&}j:La5Md.Sx}@K QҒD/ыp%-E"Z.z=A˕4=B`X1tpe;J5xS+.5|)]!78+B^]!J6S+ ./f*wN Sŋ+kX)thD|S+& BZCWR bNWсNԘ Fn9#F!xI}WXR ]!\ZLt(`2f9#Uޓ\ ZCȑڡ=[iZЕYOTwU#.H_]l"ݖ'pE5}i@hiiZ%5C%4fuPДrPV~CtT buFKvj(wWnko±!A/?Z̿ץ%orpw|Z˥~Q_!SϷaPAl#|?ui#?mb~Bn2Gݼ׏z:;Plf!ZEdn#ji~vI_|5Ww~Gῶ!]@2*h$ u .(EHPG9P95uƹL8n}Fz[ 5"U1ݜUu#Hcq^O IDYj֜I-$gJ:2T{*EuH`c?u"WXB[_Z34Q)w]Vz=^X'ڛc3,Z{܆s7TI#ܨ0sfUƍ[[r^5bYnJݭ aFY<}`~w/Ƀl &yw~Kk܏U^pWMMP)-1FJJ*ϭ8WqZ &CvwTO_f~ u:5vD.GbtD?ử!𔶅}} ~.!nr"Y(/Fkd6j)D}3г-XpP6A\ރ.˫ej!G5^% h9Y4C3Ο%KKFu8)[{i9 )S:]j qϪ`BпH fҹ%Q 0]0tq;p8YxrVVӀ(pv8O[ 0L޵\<Ójǟowvrqҽ:({Nqh$8#3d~*r*ԩ~dy@e2wrT*KDM$* g;֏yZ?a]G:r]#>o &Nl=!!]Н>XꙆ:|Kgd1+e&o$AK08JӋ)&2Xc ѠNz nZc58X*7YP*ku 
ggOgQQTwSg}x>-bwɿWlZI~x}ZvF9LS5zuW`Ej6Vwg[^_S_18#9Ke;ʹj @nUM&A1%DD0bف4<Ƙl {'M6I-_[ oo)d߼峾+yga[%42> h9;Ǒ *+)cB*%>2Oo{K-~rf b8j;/o(LF#5т99e9AQ &:% gYli!8oISSɸl#1eID$eV,S)c-JayʁP KC$ǔ!$Fz+8Oԍ:Ms`{S}lt"1XYLXod}aF|Oxk!.,hP9|GN"WQ() 6!F [On6%HB"Nj 9 7P0&%aFg粅*ﲌT;o_(]Qq%C3> g1fNc[H7[P'uGulv u;&to>=t ݁Gjq;98v]h a ?O)Fw߆\{ߗ٧i3 qxn+U#o'?91Mٸ/1'[ XH?M9kgzbiѦtTǒTZfb6[D”"=YPaSl1ݝ^lo}߆YoO` S)| G M&p@<l쥞y9 ܵ?y&] 2Q>IˤPNeq`Y0r{ӛq~d(3m} Dfp9C7>%50& BkL AZYue+-CGSZ? dGz  ~apZ¯i8w a< #RţDG)"#e,{iH%GI1x{7YM]wй b0$vU}IzE.]x碸S#8[G7wupiA%2Z7E@V3X q~x?ʻ^xYY,gWWpMV]cQk8o^^-y/{-u e]~˿"S6@Ȥ{&mLf1b|LRF%wڽQ"S(@[Ew7Eyxԅn ]ϖf"u+cyqzYA\>|w7/)z2:27:d3Gd3<f3t;TNa }э}M{&оq w 5wG0uK(v0@\ ]\m!qv6j7y{tk R]4bUسAQ{Y`d4&mG4nghnbNzB }`7'N]ʊq_8zkv|C>%%#-OaL<`!#މO%s @xgڱYy{`QBtUV`X%}ͦICKVlRj6a;A+b-b`\qGJ\"rn,rfr9(ATlEDsiN)@d*)%9%o39qA o5pL\˲tk{16,Pz #ZX';5}X4I NZ ɓ%0dU%TTz2 -K}D5/?%$#@Db9M)X\LLv8"'>v\J'OJM']^[J89nhN񂭛jC?*U hˤX">I-d^L&3# 3Y(>8S!0KYrA#&"A \rF'0 EsjJ5^X 3v兪2/TG^He h=n6ysaۿ Ëvslcki6B<}3g!zCj8cL$aYo6h9-@!̊{!) ۔FQ O܂l;|ʣ&G,ZWVk-p6s0Pv5ؕk\[Q]8LD'#EsQF*IyIуqì,d "+:Jk( 'őx}!eX$-@e~X 3Q1 GrDS##Gk'8 ;F?~Gk%$.^r#4:\Njs܃*41kЄ,cA[hqW<$ܡ7!uEZZd΀`̵ZlS_A 凅ζ>3SFrDx.sk?@C^z3鼪/:WlmLxԼ jb>kFV8tM\$t >޺/caRi$[>}M?cwt5%=X) n[ڒ}r;ݥu/vjq. pu*yaLً ty^CLr!J&x0'y|i E٭=ڣ Lo41a3fdxW W(cn**Սu>Fk? 
yg2m}q+w$]Mq\iAk UqPk"]t& JW hY'|no>;4W\Ne{I C:z[o;XoM^n˟j<=[jCoXT__Gݷws0lnsOOj2jg4դ>JN[뺼 ?=6W ė'dg+nnuRy̰!aBR25:72D z͕>=xp.v V n8uۭH.1pȰaK #.TvIzJ)UOǘ hLơBXsnVA+kKTgRCoMvc7/ qj@/bC2esםWZI\M",x%n{]namJN^>v'y&*(Ix0 E$V9ZI$P~ZΨ_=a>tUpEY:]ˀED\D$%LVjUStxvҒxФ[ŒA[jz H5uXZ*|ZTÉ Y|R7kۈiq i٦,-*8HL[u%$FYG1]#C*6T1"F|Xqh"2uYEI)K&H[\uGuD[$cLl8W=dZGqrzJ(G)I4 !vo$ux @߱C7RdJ yg1)ҡR0MrD vB[K01dp>u)5W{K,$yNUe>͒uWW=9[3 O#{7ZQ#J)@2q 1qdk# )!32o "QL:}U[ڼ4hn4) kV2Ge*lWJGOppKQB Xe6i=>;*8YHq?#"mhBu%dI] sfɦ*e1I麤lL>AzDVr2ւuNېK!$FL<f##Gt6HeBڭRx4VЁ)iJ7'I !$#gy|''L&~q5_px6+NYY:V=+^WE?aYI9-˜ ;%:8qʨэ^83>W4~y3 *opRm%SHpG@q0L|`'oxmax}N^7gI!(ª}5OHgkaKp5EEi4"F9N2&UP<a$w$GyJWՓH<2#$)0d*fg/¤@pT kp.iu%hʟ;}[\$}Պ*Cngp9%E"ˑPGsKfh,sqH'!EsRZ,%o39qRL)RY3Ζy%j%7a9r8KȵYl>@QO]f}\t~[F74{2^?=m0rkwF64cӮ{3 <3\x|6oyvW޵qdٿM0nn= ; &~I`Z&#AnRMJ-QbS"6anw=uNխ{rVחqXA{*soY]v/Nt.9[ RXݷ,vmD\iR xqׁCNk| ]ɵѥcJF]`'ᆭLJX/<ǨDsNFӼ{$\̥HiBndQ a*m=^鰿kbH dtn19sTUJAOS +2!9W87ZtUNt-!4Ak@"Ӊt KY̹ !Y4rVI g$``ˉI&˼HQI€J,pԻՎa8~G)+IyE+(Kv"\H~8 4Ir\_//-Q՜Ť!˽uKu_i<w01DsZwR;EDqPVT9K-saw|t˃'B(QQ^>v}jmތߏ'#gwxwϻn׈lN2\b] ,|N1i-Y뿏{xr+-Xw;⢛`Q=DV|t6̷0b޿LCnz+5S^Vng?]&g 7g vJZvk킍lefJ9Ee$(0+?ƩsdnFCŊLqr8dnU6}n=?Xx\vT`o,L.KS/g9 WCs6Mo]?Dy>d'1@J,С2hfFrB9lteK.ʓS<+$ z弔zf.7HM&q*OsQ#O91!'rCoܽ܀] װѼzrKCcG$3"!"u-A?"pqH L"V=Wh-|1Zɴ3T ѪVpK)J \Ȭ p J!,>BsUkOݞZk]i)\=C2bWh0Hz0pUP*[k].a4WWfæ7GrI̓} ^Rb^ւ|h{Y)Zu/WmzJ6 6R \es9\+J5dWJ }53h |ӕ&}IZ.K J "N7?zQL'Ԁ_Fe',/<qyߛQZ j݋7\jGAe79'ع9,wl~g˒uf Kι|zqFۼRUȏ{qO߯%j-6?$ T슜y"%󢘽?kҬ7/߽ΕEVr 4Jkͬq+ɭmj$0qor6*45v)'*qɉD}K>RǷRvC|@;mui%)::^:O9ASUs*nΫ0_FM/q17E?&Z)ٹx9I \W_^m?WR%V^oUL$+q%%bj" ѵf\GdGl$G-lRB+&s\e59r(pRCJzp%jyNaD[?=wF5vJ/Q]# SZo&)YF7og]؏2 ^woʵlQMsTMo"sT:Epc^XqתAi'_ڋG7^1jY.Vڙ=FAnٝ*AkOU/QUrV ߪ5x=~==xwe;}~ Z}MX{>/ Dkipt&4p׈/R}{T|_cGӎ~7 rUo[s|qvcv5,!ilCֲ`%Z˩wdb kϐ p8Sah-ZMU58+ !EJN`g6WL`gV=\!\\\e8Dk{VaY|򮚫2 6 ;n+G9˦]Ό3;^v{ۮ3V PLBHJ+Iy!}*^~LΣmދ]6?$1dg[C͏o߶êykgKA1ì 9M.pGI.Y8PpV;wz%kHY5/٭29hjXh5!Hl^~b~[hpí~ՂCb!1tH :$AĠCbg31=(u{eoT,OƗޕ\]:f+0pݴs]6qIك!)' ){- 6SA<uBHa:re=GLb\%I5" FF=CiC^ծIMv+R@LNe B`RPR 
+2!9ѳ0T`EVYY4rF=-Γ`sL@My#((ȓ2Yv,yʛ/M˃<~l|6i¾ok˖(rvVtB2wkw?\T;Jߎ6? I5͐n&hbN Ǜ3cy8E9G*'tHs3otjΩ*F5?\'ſ;;֯wx-6{zWڼyQr%Lb D[C#8x)Q R1ZA ̪tQ &K]oןOc+ʻXO(Oq\r~Zӌyah`ĻӴNJa;-xTqAI1DaL¨KmjQyoAV@4R[V[\.it*Ř,:Usȳ?N&ԵHcb"FAL& 6qT9*x XZ̹z!&N'mqǛI(;Nj7d\" KpCftT N{Mh$T:0ҡ@"Bsz^)QO)8*༽8o޾ߝm㛻};=0Iݹb i7A::H5p!&jig/ s F A Y"OTG eQ+u}1-0J~ >eh$p(O=iR cS e,<Ҟ#iEܣdZ*Bsm:V;fPrYbd<$J(hMpydm@ {IBdD6uJx9Þyq$zAX{xInOfWnDJ mK_gq\bAj HVXrD%vaj"͐>{w-_Unΐj!}{Emr!D%‚N"rài p˭Cn 2F.G㏄{evWqۣ2jֻ8]3Ϫ]8͝wO=b>bli ҙhJSLNq#HF"D "t}1óB$48gEέ@h 9{2A<Jw K{ .U<|gݮk:d3nMa-Uq ~:<ws}}p_D/_ڵe]:flny>ݾ[| a>˫ofH]s۞^3o b[ꖉ =}Bd܆o KbcX.bcAѭvR8~/o/ȦؿF;VcKx@B|Lr& ;Ԅ/nn|)7-vc\4.k279;p~׵%X}:JwԏifG{ss=,w E6z4l<11{%l-\860/[mX~n^e혌ƤƳMܬӣiˍ{v{N]ʚq_8{k֬?_mx)}>2+paL<2ʐD'd!kSAxgڱYĢ%T騬q7xNpPցt:ȜFC U9y'18LEÑ{W 0gWE]3T W '8h$)qg#ᖡ6XŸtΎ~h# K ք ̚_?_*IG 2JQ{`t4o4MڟZbb Z ci2):1[ @\"rnl̈́!d "es,b@4: 8@CTR K^sJf.sDK2x Ƃ0o5rLܪO,S c۰L3Cu0:ޱ쁛IZhP;i'Kr **f=nK}DϚO I9 HHlq:ZbeAFnAwӶR0- >ZO:O?]MTIOۓ,7N󀭝gKsIgwTHCr9$$Bz0ό6 =90f42s:&R\_D"(R@6d![3*ta58TʺPu#iV1EAܜ4ywy٧YH߽݀&Flklc &B6y&RYg!zCj8cLC0wM"Yu PH'e(^HI!EQh-ȶc3ծZ&fb֮jm]Yk>#ح3ɪDtB;, 2VI&"*]L>gB`#|]}!\LȐ+DK&!#("@:HF5*jևQ(T4b58T#ќ5Y#nx%J[&qզkKB d&jDŽ;A6F$X^dRg At n%ɒ`DG W gSuVCY/V$% 88moV !2ha' t:8X< ۑ%z{No8A]EG95FU~l. g AkML26mB<;V9xG}rma;DKxGVdŒTR mAS6" Xys>qWGЛAH"3A`̵t-r)A寠j֞&8\=1zn/>ۖW YNIƤ ,m,3)Dhe"e좉$@nN,edl8)dv2! mgD@BY/% û1-0Ζ]_x} ,:E>LQLM(8ۻf[Yik[9iof}kbH7|`NX:&lS^7M9}~ӫߏ'r wCg( q.,G@ٙHGVpmPk{QhVzTON,CRɸy`RϚk<(8ՐT9juܞӣ?W=es-J[aDžb, ◫FoAiPZt7[=8|qnp >NHe2Ȕ# Xd)$1T$ϜO'l]c=0Od(uhobbXȎ#47 =h_{߿">ː_LfZdZ!}ΧFsG' _JۏB`L>ӿ?ZaK*>Jn\&r`$EY ge}xh!+}Q NRt!]KB7q݆E[ղ|tyHPzmw\?w_1+7XAW~3_LdzN ~O.3d1 ހ>\]zr@84YLW@zy:w߽^K!nh*l}l~7qz%mШTn|pQЅi!WkAo*rceo9ϢJ5q! NCs ڡ}TH'$lUo\ |dAkߕn/ӠL1]2W3$wKnݎ۸9'2L99ro5jhiTڤF%$j4R eb &ⴎZ=e3 XL9o`ͅ"ii$"@uXQX ʂ9(g"`b#Iū% TU bg'aܾð+0!X!{#3! 
J@gE.ct,Ht€HJ4S I`qnE [Xx(OFRG8GD䚱}L|rl\$1L럹rh3ޓz >X߭Fm$MFx>VPy!:^Dƍmv6 ooa^)ð8OAQ& DB1>1LbRD1`pg:Ukg<񍄽rm_Lz^ZՇpa; yFiܑ\ݨD>I;R^ ݺOݶ/҃< J/UwJꃁ+cp$WR(DZ.GFS.βy.VIvMVY/}0CDCe6M`lyPCc` #GD+"ebM0"rW>=+MZp0 7VߜȍŻ+ej(t^{qPX.G0HuY)?W2Q|ܚc `Ә 8a0̰r`g) 'a'y1|jB/7d~ j:Rjsfȵ1'Hk#ke~eﻲ;H|y bҹVpb#Vu` u@p"QB2U{ >2 a VWSU"CD-WJN{zp8W\`\ ;\%*9ӸOZGq?~~'еjZL63^|_Kr0zZCxxb%F@օEQ޲(Bcbk C083<٥q0զ&n7CmQ~$?υR4 @SQYzsp^xonc%'<7XpNr R-q,RFJUXFsۉ6 5׵]nKۃAG>VJ-rǬo{wc~0f;6nl71nxQ9.v|nƟ+ey0Ȯjj$JF=ެߙg %ޞ BfulJa^iGjd~sVk/;^B>cmUZMZFL(",e My+!K`E()Kg|3eoKP{$mX>-r.gJ"w--nhLms;;Vr nOksZ:Mm^Sno4B4y,`OPEIF|)&`ahHd(=H£9&( 1Qڔ @SJ3;i&?ҁrz;+ ~"(M1½$"UiB=QDaƂnLE*Z)l:?"ߕy2զO:0Mp`sZ[3㏃ē+秸+{ME1E/(bJr#J.HA}*cL}Y4<A!r g WZ `nI [`F1(5clZDkV#|% &OM]lՔ[[b}7MW E4oxnpT[9A3BHƲ"NN!gF-uTzO73L/|=#7^Y;,&8p} LQĆ3AR y:A7L|FxSO/DjDĨ,, Q H 964P0Cw& z/-z]hm. m"NJQזn&";җ$M]a85,AEh`1L-Ƃzΐ 3zLHeYkWՆ>n5"4\ߕ߷#4|yBnRI-4*0ÍPAjy%e}5bU&;ɊҨuAf]J;߆G4Bj-7V+FǚLnn<ָ(F2kFJ0C~6UH Z5ibʋ@c#wM57[y`mDPIr "ɚԴk:6~;mzLqY*.9=4_ 43ߛ.;ݺVDDތ[f9KxƳM %D(8 0Io#jD1CaPd֍ܳ(!E utTq3i|n"fRYƃ9d^+"6 [cbA-=uz]]mo$7r+~͵/Eqwbpɗs, [f$o =Şf4q!uSnVyryuL뷖?սmmqo-wI:kl@y^󔎸mp( ISv+eGzkNTidЋ~?=&l! 9!FE8x,.)MM& pZ$Ka:Ɂ/ 4hWZl;f)Uʵlɥoٜ\ZRb֢Em| ;nl$dv,g`U.y*89qUF>3kDa"TS$쪨 *D^>V4̮n?3"D Aay\Qlq$g<%Zv['o]~8?^gJa_VYI)=`g|CVҙZ*bM`eMX储7r ɐtU1'UI|T]-PKD ]MKڈጂB4%H[[ncۤETHy2*la-ζ&[H˫׸97Ui1`H''_Npb.)"#T/!XYS^kbWB"h+chd/,MC@J0Qp;}df*]8[l:?+(XncW:[m7Y nL)2C*U]+d* cS \b_{X): j-5<6GQr!Q,CJa7q#_EY4x("OqV<6vDS%o|- 2] l$V!w2йdDAcA74*س2,L@)h",t٢gWMbCuv[%Eld' P9#-;>zCrPY^&#dEWEJ*Ovqv`pbձ=!f`IZ$z{7q acApSkE?u:qs{?/l}ݤ/9onb[K6w?ʕ߯yݻ?>ep5 >߿_%-(_iW:'ʑ/E/Fɼ"Y2Rӳ:#}X2P.mCB!a1i4r! ]x~bcx׏Or)TmX^~\\-&Ю>`VwvXjv6_G?ףEl,)sTz^|+z[~ݑƣίf7xf翊]-Ɵ^8 ݃G+\(ܢHNY֝crhȅq4PyyjݹlX IP$ k GLC.99jbvhRհֹo>bĢE%$͚v9f}d+)RB*B m{M.J_^(s޾9/o҄Kۉ ?d6fт6+^fpqxK .{Y[Jș(RT.'  
ᰁ,+05ZTun\6[\&[KbR\H9GEŐ!%ݫQ<_M7${gtA0ɋ"dJ00,KոLFe, djR Ɓi,Pռe!.Q@j~]Uv.Lukケ+~aq~\mJŢi̕X<`5O^a:w@q@cl14vu 1yk3ө"U"V즐+ٹMQfUc*`!c*> P \Q9qBtѝQj7q_ESlIr,}Bf$n?%#Ӗ5!^rm :!'"\*%*V`eS!hfjƬ[ds^ؾEl U :1 &P6` $4ZǾq1\DKTw@FHb\HP#"`S> Bѳty>[XtgVkZ&C(%E"3 >OE;ո0ogyұC1]fAhڐ۫gpfiB1Ƙ}ZxM)15 : 4،ssM5tv*\gp3hr;Ђvd^J9NIՐ?seX*pV2 )Ru&a3UeMrpMet{n2cBݝ;^їGWg\~Z%>+c D<"ۿ"PԶՔE~ FY餫%`L1ؒ!W%:ҡI=.6<{63ٚ_僎|泣Kpݿ<9Wx7y5)M[gOkjѺuݗAnF@)_N:g^õu -^06f+ +A*5ݬBJ_&uObuqq<  BtQ+0~&zp ~2zXq@`|.Q5xJSҚmaÄo.N~>7N:.,C@{[j4d\FEϣKVݭ89a鱎$o|9 ڿwzѰuMv0DaVJAM`L&7o~P6jR"NA1DV_PC̃s.{^ِ!&@@`09EwrlA)}wSTKi3(A) x=D&ֺo^Y P =bWdlߵTkӠWDfٔ%]v#K]/j#8VsVLN-ߥCI*QjM  W5kWPMjёVsBrnglg?OJ}PE_h߯FSE[d{>8M"3j>{?՜|[ςmsOTgUP@*AS (ϣ:E_,)dOf|ey{f(dMJ>F >i|(pt Iĺ ̟E;#$8^I1 pܸWNS. ,]] 󱗋<~uڦٽ8yfHU:*=v%Y$0kڱQyM; њSYAQckcl뙽da1z8Lz=f3'?9OsPF2rպmi7=ޯf˃-f5rzoouwswfoz.'dMM=} eKsWC`[S|k^˓O] Dvc0Moi))\ڗNU^UlRruU .1:UTFv}cJBf_j٣A!*s6!(P$gR8[٨W윩qriBۇRK+tƕ1o}6 rP9'SR bJ)!H9L*S~NS:[ UhUJJh2[bEI4sMGV7/O&4xw|v|î&ﴍT1.IH9 LL o$Z* a 6ll~?q ZoSJkdbW7PL< P5O(ojS۶uk &WYqNWbUZO@ 9Yh- eXVTZQ][өc!8B?_Z̘dyyl(9#ƆX bJ8NN؍ !0]j,!RU=Ͷ}M֗lgtgk&!A.GCшk? 
{>.:W^aef{'K݇y*(ߍw Pi :-WYSCkܦeWB .?{Vnm0_αfΠ࢙A?[v-yţe'䓑NփkCrR29r̉E IH3%ce!H۬(ANڈܝc¤P)tݶ"H'ϐSb"O"֫]͏+U?ץW[^d<{g.hg'u1w:^}jzܹ޳l>͜~;?IyXiO[,zuʲvhgiftI] Y$uAqb$?dGoM-g-⑟Ny8t4Ϗk-F$^ªJt|dk>32sf+mOwJV ʓYO߬>4it[iN˟:99_޺IJ:Mޅ&yQ]vյkĉwFFվqiތniui ?KTS‹ٻ/^ {& ̹8wflGF/׳f0$Ζfxw3ucy MyrSaqYr7mdz/uNwٗ[]vծZ+X:'EGj ׋XzGeTi㘎ht1?Jɿ>RT&~x2yǗ`zO_8ϿtS_^"Ο0k.Z??'鿽em_MeG4t^ jrG578Bn@Br$8+.O<ꃖx\WIGn~>x<Wzc+T<bbn}>x)kWn?b~`eΑ7^y]w'h}(eHy- tY*#Ӥ "|IH=|#ы K&KBmB"YL!^ ){A.!Z 7 $g:<٫m7u';F N*ߖUm;ٶ6m;i2=n'*:i68aXcЍT65x$mip#F؎Ո}?[aFfd1c!q ɑ,CD[vcRآ9A4!j)-U'"u pHXQ*HB6VDY@I\,<#E1L޳N : b=Jyn_=AM|pᴩW~G~yf=/8HIL#S;޸`'Վ7nWVo ứ޸J+z#mPzw_ t]wבQs"FZgh ĢHVXőʺrITwKHO&s1#euxߑ 3qJ:7% `Q\ |"q[JLQyŌK%8cɩ`-EgJu3F9u ˘")dmTѤPlߕz6麟$(5ŨTh]§P/$e ϣS$LMf\օ5*wai{ UZK;Ϋ/ċԲ|ȣz >˓p Rn%htٻAksäCsT鴸{tލvL Eܬ[/[aT;&+:A[mTw7^eˍz\VJ{CRoGR̀py([F*\ɇeh9e)+kߌno(\Y=\?WGZ>e#N8abg?~}b6kBW@JyP'JHsc>&o>UnHɮz6>uNh')\|\fB>@cUcq$wiNqz*wK)lfWgeW~Y@Y`'GYe?-C Fi?KFQ )fV)+e۸m#sɍA6!IXa< -cvHph5(:g(@wUl` pL^+Jt JZ *`G+l0ꪢ{OW:,=GRqDW,pUKf(tU tUQzt9cDWb W *ZtUQm28+$!]UJ36hI;]U}ϒN i;u13p Vb骢u9]Uv8W4shod(=7CWSϏ rsdp3V6B+톮6Cvc/o@W@W[O1Z9H|h)TLo`r@4]tPhhiW6C)Ł!Msbj =%ث:f;:p6w,&SLȃ~|je"hi\a"4;8J!6|IKi7S,ۃ]vz0pPDaEhEaEiA>CQ(c5  ],5VR;]Ukz>tHh9%zlUO+bUEvp@Wχth*`CWi0tUъ[Q*{HWVqC+I+\;J0tk]t ՝(HÅ{Ug)mj=K}S?s[JWuRR^jV( Yoʻ\CY3]\n g [vm%JoFDxuNXk_{z; wnݵFZMj m{4S\}W/㞨ENܣ1%vbX:p? 
æuIͷ)'T/!R Yk*,~޾RIpsi^lɤuZ)MQ' 5E_>8^wU~;"]B-j LU3q Wz͋Șٖ`[L -n l`TL,BǔɅ%ss%c`,FV<2)^b@x'Zq+uhM>gX(K.Y!EL#)3qt#9SHDw*hD9!75xq~~=겥R)pMQcċʀLJ()t*|%`>GCVpfLc\FKIN2l]{M2 h1K'>߭SG)p3F!A5hC#L* 'XSSatF[bDëkJtK"sfsa-#g <0 4srK>W|#zZ&vV.xmE6O{T2BI0۬gY$WEk <4}>0:RΎg!NEo=|a#<<_xLHwuH-7N ү?&k\"Fa%K[CYa:3VSE"1xL0VĴO` ב%F#pDGk2K T&KˀWhLr UR; 4A1E8TPt-h?B@iU\:&Lf`D!'[U:TY "0AP,1y6|0t ]`ርd JxCVR+s%a'((MN!}-u{AΚ>x XE%DXkg|V$@&:"Mp ڊH\\`AV#k  "gFP*S3D+983&`IXH+U"%"$b3.cHJ31$3* DbH}@ "IY*r) [ [C > ѽ: ì )e&FW2)ԙV&m<D V4d 2 WHC@G] 2rolD 1(J8A1,UX:"< e/_H+eT$ fW: l'u)!`w $1cҾX*LBڪYBBb|e^A ;% OQVp`"ʚ0x"?f؜:h|AiQ3WuE@dbVCp|5c(]6,Viw"B"Np?4E8UcW"?AՈ~gE5Cl+ɒA^VA#h;-}??vs|G.`U$CeR֫X&=#0z;Dҧ>e#8h$!*stȻK@|À 0 ^v@^! )C X4AG #ѧ uNBPX,0AAE/Bٗ>XCA[1M,jjF+/ HHDhRqd9vz:Hv> LIF,hw9& jB#qȈ EO"PaXDH{4Y%Yo 0 ۤ9mEd_? XI3b$-h)1T-@A ,{$hT=Fi SA<8YaG=J-P|N`Ÿ08iJ)2H<*pM 7>>EOh* #&B)J2rHmj,(JFb6wv!aOGBU.E GS{[bc=fFO -rz b ="̤zB0sETs.!\>y:-1v* eP!*P=xZ)E׸"ӆ(|r1 V if} 1u"[Ŧ!]jF43wZe&?wEα\P3' f*kO]r28GBKrE+*QZ2zs"4GfZY#[9~ǮjeXteFȕazlKO4$Wlu3rEN"WD{Q\Rh ]1Io$#Rw?/{Q/tF39PWzg}۶.ޘ!_o}L^CxQQowf;TC'tU3YĮ}E^<2B~/Q'5Yc 7/k)m :uߙ]S::4 %+c)) zGBZ VDi<3 5j^䊀mFkE3;j9u"JgYf(W&NO+v<!\oZ+~rJ'5 :k!9&pLnarE.\P\`"}r`T3rEdh箈2ʕZ関+kю\њGWDx}rxDW۩<]R*B;reOh8ܕqj?R28R=#ʲ\=@r'iGתVh]\eP,Ws 1=E㞗Vxv/}&[ g&p(`+ۑiղ~2Mez2D!rs44yR-@4S\ɓ>4᪣gul(#ࣛ FJɏ<(v@ im)W"\Z+brJ' J/TKrE5#Wҵ"WDk'( ,)!kHط$pClE@@2Hڱ\Pژ䊀oF7VӹrrrER\PvL7f|2HA\P εz^#Wβ0?]EFޝ?\VsWp7Gqj(Í+rتI-+o1WVhM\6`z3Oth#Ҙdr+) -Oq\)XP|L3Gwٝ!ksF"a+! WVa=֮q(9e0 :ې\ߌ\rZ;u" Xf(WCCr`%y2B2"WDk&/WD9XEUB !"co!>ָQ:~;Gr+[隑+5<%Z.WD `mK Wժ"Z;u'D8\ f Xq;O[Y#WY3ÅG?ԇ+5LA꫑+'…ic(`]=gj;Һqab,Wz>o)>nppj3u"JkX(WgJv/$J8AVǒQ{q8xdp S;3rتB{)+%+5" .WS; y*m3腒K #mlr+4p4j5u&JYg(J ϓU)e3Ik6;O*XЩCjY\?5l01~ qpu? 
7e#Bo  Gŭ+tJ kI/_˗ WN+gevR' "dctRC,oSmϽ^Jw1ۼH#y۷>A>RĿO==aU6Տ\W2OG}+rx"A]a/D%ԏБr}~Si|s?n.Tw{2[-OO-bHNsQj3 wUXlL}sMM)E\Z{yRrwN(Kv=]7[s":t z΅^%eRg7F\\"лPBD1(Mr^KsfGơFA?ڪEiwmn5KW[йyGܶ>o|c 7_gu_ݻ5CZR/gȯA ,wS0Z^_y竸5?_rS~j>p?;٦_sJL=mO5T}!]@`-ļA첩*ENUcĉaJVHiB wCv_KDOVȬM ]/7{Ƒ q8`q^nl861E*R _-qHI="˜fw!23#PLhD˹0z3h*4LjM!Fj$,ĘX/9Q!jTd.hKS$Qi[F\G]a[z*硽Ш&!5 ڸJ;(HnѐlpVX BUouYP/ULKІiu҆ X!E:F!i&*.3hA< ʏƒ.qCƧp1sģ.8CqhE4odhR Z)M87R;/rsAwڟK5GP+5hNinv)PsDLQ=H$uHˌt&I9x4 \ҩ© @)γ=%ظǤ< Yg׌>[V]V~ڡx/skCoUFWEß20_we|{ ?NnU 8%ra4WGs0,RΉxӛ2\(0ϧYI4ClBN5x 8rۋjptX<[,Suh߽_r*-%eo8,(LUڞ!j9*c:`Wf~Anb;'\t~BLd6xH [0Cb5%5XX 5Тߔņ[ C D6h$F1 \hyΦ[N t&cHmXQr2:LnW%}>ngh=2xwϏpunJqV8rp|IlpxD֑]o0.%r/iY,!{aw{}oSv05;"użq9@6ˎ]ͺClY`bqZvyp =o&OV|̙G|\ZϿz㹢+zO h.lzlnutOFZs?yxQp?b!mx.[f[vR-A~Ӄ. ν+p xi|uO%a:q46 ^XCsM,K\$S 6X1i]l{uOVҴK.? )qG619)rJN4D`&%I Y}/@V+=Bc@ !J9(M\ c:d)FCpdQx'NZ:+/s.i此\{0 ALe$ܔOɧ] 3&)x0DBD E [2t+őiV48 *C "+ @9PjrH"q 40b@Cڠ5gR ə GR9K (VkFX('V'E*o2'ʛ"qĉMYãpcv蹱[ma#^;chYb D9U[4DEthpIQv .Rl1ڇ}7Rw mح*g;mg߼}-gAPG,8b]DMWV7 ' TP`e!Ւ8@3tԝy^?{^ĄwIAqb$8 x f!3d*Dl?2QS n`Βu"$$rT*E0q 4?~!ufAiד=\yvkPlz5o$Yz> И蔲L3q%8Jb8F5P EkZ p62,J@!r&ZM.`zfR5˯L<95rsNqBLV>[omyEf,_Ȳ(07h`XWPUѰk= M m*%`騕 dgP~=7R1(yT 9Nsdc j1XLOZ+4OA{U}x5-DaSAN[\)-HVh4SV1YGd E  hbF:*FAe*_AHQ&wIo!pm$Ÿfvä`7@ f?4zȒWv7-%zX*YT̠;Ed=u.0ʩR ˽)'R.q+j<[- YmFΑC+{]|ue^[Pu w$ *Y:lKPojfE3,̆jIb"P2zE)Y(#'8JzPA7H , IUhr#c6q#c> ͌}eBcQpt񚀚ʵ& oA5iLUt&:dR ci*0B[O@ h$!E#א= JlR!`Vl;LBDNIBَvH*u6)#H.R)?>=F0$WyZ qbI'IL8cu4(&LR#1sXfD&yW.V>]uf%ʌ72kM4Ê!UQPRO%5$R=#m r=.;s͎}0sm\#@Xg ~am OȐDg~X.>*DM:gJn'WֆsfK*j-x#;;%Љ%ЉN;9SVȐlH`(j. 鼓҈I$qJEK!c⍲8*h%{m]+R0#4'(xAcnK.%K3sY%ˇ7J`o^N3w^fJ`ILWOW1kF4Sz,VIֿ74k_wEۋGm:'S,«G a+tt?A|LQI8L\M~|A |;opNJ>@/j YKaaa]JzV+,kfvqËW-AZ 6cڞJן8_x-]Xjlh.GNe{c`>PE7LgQՀ%ܠx}$V&&=Qpm 7қv<,DM'EKWrCO㯣rq5>~ЌB)Yg9rԅW{,uH}=ƕDͮcw3mu+Pv&h Vv$ڈ]rhu4AĜ ƅpN"6oҾpM 0᚛/x ! 
X=s}m3V/U\"ɈPg$2Ɛjqg r33彡G&T8ŗ[t*e!;P墡 ~X.%rV4hQ[Y;-D6ʱ_ruyćwfe9rO-#I RE"" r=]Q::ڡ14Wӏz~24@%`[bm8&*FQx с(HĜ[CHu(DCJb DYj4vCP quZ+n'8w\޲ӛpnC<%7hd&Mf֕`lo\zsxMHcH^z.Ȱ1g >hSHJ4af,:p\]& a L%M FҎY03Z]^{|xlyHAzM鎦pA-AI4G-a1DKVx!=ߢ)@6K3>X|* pRh2XbfJEYY%17{,|uKʽeu;ruIǨ.CJ>0 gTO`pD V+?1Sױ$\)tE;\Ƕgp3wȄ`hf|+j_]ll&c0fp d-# dԇN)bx2هu]RlwqRm0*fzqkef._gH/4@;Bڥr}(@R[cR`!fH ^ wE%Kʮ%x8պLߕ$\Lci 'U("x65P }*)o>bk_ }36?FkR~<8\gƻ]̀,u@oپ PK-,ѕVҞRRq%ßR^~WIUj:RjKf(L7&zGY~'<щҪճemeY]O.Hb31-ɴGcZმiaUi^¤dQKnbE@-D!'QBvÙӋ EDZbCE%/Nր"*%8:e4PyۘQ1GӾ$[֑aY ;X+\$JH!3}/> uǞM>/+7֗(+xWg4qE(me@+Mn_{se=W z=F7DcH£%&( Q2`u){BڲΥ%ORv9+A~kDZC Ԗxy`PC0cSV +6!b}z1EKxqst܆zy?\x=շE5A1-@ƨ:H8gX风Jy1DGA4i aQyES{)ER)+,kB8J Ř80x)18w3ZwWUu8.j36_-JF"8Z78$ <(Or kw!Y啙m89Y>YG=dz{Y ==|wxf^ ,&8p} AAL!jF8CG\u` Ӽ|/w<7'Fz"N"gbT|0 Q H 964P0CgNwFke]ktvW8$2M'ܖnNL BfK9@<)p{KJb9STb, IϹ@9DŽD#viD$a0W\cQq`x. u=nIzE.ŏuu,ݼMR/\ uSWU{ѯ8:'*[$u@+HÝ{*iSM77p URQ=r xD wBhOK@fOYlQqmWikWW=ELaX) _+^cF#&Ua~)~UmgD~ 6 cڞ)fm^⵼{]Y6Xjlo.|TmM4푏h~@͏7at6U ^ wb67Uv\7Xx޼.`6nCz2 <)Z:?z9_z6Ë"<.o` "C9t(R[Jb(U33*ܗ =R9  z|OTaYˉRq/,k<{9]գ%` MyԵi!Ƚ9%%_]kw!Fh{i<.$y\@`x\D%IY=.IJ{ H\%u \%i%:\%)•"B=(\%g%q8i:\%)•ktFp!lU;~(e$ħ''I< \=EJTؕ~\zL% S:W\ORrK+Rvr#bu_e?>|Y[ wA'TN') +)~ǻWŤ-SxxC`Z%&DƜQ}{&g~PTO2#oǿ.&Uk ˅LSdh _Pr.R sNnZ:9s 3֟]Z@axhJQݎ\K8UQ}om\Y]VS7ҕ~~ %2@JF/tJÃ&C| @>]@FMk}6ą?Bzt&)%I $`@g!+xeB\ Œw@JTW/ӌ3+r6pe\*I+q*I ]koIv+ ;PS[uHvw>LLz,-%{fSMeG6 D6٧n{8Е1JӔ0 9jp ]54zuPc+kwSRWXAjpy2ꪡ_R#]Mi U'3P{CiU ד+E2jh?P1ҕXM<lpN7^K+F]řo=v%\>R:%BW[qͻbsZ^{]^4-{ ]VUCi䉮]ƴ6%>g0Jr[Ezm7CJjsm <4=|sg"QMdq}Q iˮ Wkitľ:)x;!y;,Ԝճég"r˦r&vxS~M󪌊T &f턤0K&#T@n(Ja9d pQmLVhv(r':B"gsf:E\BW@K/tP*u#+q٫_Q_뫫7]o<%zqݛH;zӹv~λ4&89>((NDW|odF5EgwvY;m&ܽwo)BZb(vX8f3K_XsAڿ|˿>cR=)0Myha]djr?RgW[ڽ3 گ&X5:EnډwqZ@^MYf ݷfWOe^Iֳm1+l2?pس7-9~m_kv#'|p.jv>ZeԿޭLc ˖%gyqN'=mF~wnv]?αj_=M$?Nffmf~{]%˷nf;>]-nWս{}l~r&(W[>fTg.w<7<r6q3FpZ su.Qe=[}ǐvkk!رefI"nTTgчNXC?­}(X Q.95!J5vҚcڦCÖ|OO\->Pn5f}à _K4\zΛE s ˹>DlƼ+2w|=r"*MbʲCZࡲdf"7Xed?]PaG䓓KǍrK |9g}ʣ9[s:mSN_ne^2.Wg(4CR_,tM4nԧOKhLrZ~͆\ZcG?c(Oá92 ;'CW sS+ftP:}#+<<jpd5nI]}9t%7zy&r6!,]mWВ?ۡrdJnAWDWv&DWlXN 
]wnt%cSW'|n'tNޫ7UwՈ>Z7RܕH$QN;JNH"\4 E/PZ'{WKV^O;m>ߋZ/NW@oىHz'4HJ&CW 70 ztPjy#+[h} ~*tRmCNl1L )\5uUC6p+š) 0U'Z;% ':Bj ]>fZftP1ҕ,]5<Ŵ]6p`CNRv:ȦCC[/]iv(&2-Jj׮UO`g&CW wS+u/N~j(I^etk K:BomVRok?Jmk5dE;j1FGN/&۸kO."> M7zw7%G w,np vhI] 7T6]0 56NNUCh)+vJN\#BW P#Pz':B2 {U;'#8骡4DWGHWJ[vf:cW .T hj( 銭a+>-ujoK'NW %1ҕfQW S֍~V/P':FZ]yMO4'@jJO'GAWfyoYM{{^ڮ-zewsϜ݄;Es^+8h7+Z^Yݟǒ?jtA?wIݛ?| )܅xY7"ݼ]}@J'UܼFr~{w.ddouU״%-/{.~N+s˯}OuG!uu{{Fܮ{l,+>Te4>~OOd'lfn9"2D a,=q8N!5yl/3S>ә@w ]7PO}~mcO^A-\}snL/2<RZWN`dVQjdM.HdЄv%ۋ+{]/n%w h:z1x/p)XULBxYmT\ "`磍Fa4I wƪ.G*H-,aB"dܦ2b*eƅYRemH}']Q4R_՘ҖR ),PR0yS$"D1 pl X YM1ZslS$%J8_S AXlk{@xƣ%<\ XrOef8Q !c[І$4) oR!TP2Ɗj34lUF\|@lfk@+Jmᜐ)5hxiGҾ eH& .$hiMV2GxyKu1OhNOq(ylQ$+YC V䇂TmQwZ ^ƠL:b(ݐ|4 /%|'P4WI8:BR|!5Ї֣41ՐR AJdC|.jmER٘" {,IZ͋IU"8x5T')!h0f^$lȽ` 0eU!(dGoE K"o{0L!D`ˈwh8AZyC%-OήYAEE'  h-ĩж mguW(BnD)P,@W67Sf$h^w<$ ;ՙTMBJcb$ J@F;0&  .WCY(fT_ [PV3$_uLYT@2+tzXEPB]^؁QfBnTzCJȸ@LAA{q@rHBB3Md$mjLR"Ɉ<\*nQ1,bm4hL|1%.[C%ǀ:3([ E?XPf̤}B8V APP{Sv e*PHq 92r\/XT5pA-KA^ 2"}\@PS(HYnጴY^2ZëAUDI)eIنvZm`^* R`YKz%㠄jHYV+"QY +\aF U+c+N Lgg -eʊ~)mĬGr|b DHM!D!N?oS̀]Y۰<\rC-RUۜ|6Tvڦ:B$a ƋAy@0>l:!s N{ I BKU205n:&q"'p?X(hP(#ZByPr $Pd"eBU Q58Lt,1hXh=ŋ `ȁa$AkuS3<oĭ ‘ ^XTu<*PJɩ᫈;# )m ɜ.+$`0~Nv?n%꣸sȓ`$] XT׆XktdSHcޣ.%Mdۡ Z@mD%8y }t@jT51E'=f !nPv)Ѵc[ 8A%DgH!LvRmΛA5 H=kVW8ɠ P4i2Б9Zhc"(Π$1ID(6ɀ!RP bEED֨d%?r0碬" EU ʉ䳦dٻ6,Wa?`R"]K6APψDjDJ3SMbі(bG7Fd>UuԹubI; . ^kY "kTRpP2U#U=zˬGae +{Mw $afs` "}A׌I'y%T0 \nC@`v y :t,< ]v6NʊsMLn/ -B骅 $4z`2o6:7 }lPpRf(.֐D{碦

G/&+b\D W}\!O.BvC^'yMfeȅ =E 9HbQyTKF|Rt }Գ\O` T.C127iâHY=+NFМ"5R&v֞@jX{ |,%=3)% &}*@$4. t-c|{r5WhS=xYkʾ́;J*C52mP jx;XAa^;XtdFjC0^ԀD\D{.č0Y2&3ROQ!,2 8% EK EvV;^c6WŪRDt0IJK1at̂f%d{iIH"8DԅK{P@[ 4&,? tjOW$"X0BlovcL~AA\B3Z8K 浪µKə)xVx}|?l|_?#_TEYuȤ:ZF?BR,؍)]_AS8?o+ߖx6-t|VK7e2 .^.s{ѷW:/g''\c?C4kg[V8٬]i~>ۼM/>̟n~/_"/lW/M8}Y'Q%;f{nO?U]^H1("k6'z=SJ9^Hi1r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9^Hs'P쇳"*nN 4@ d3$9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@ ݐ@80:x'8@򘱦> DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@:p'|H>U! w= 0: ;Wu=}=rG8$9vvIo%'9>HZ u4L' 9j DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9 DN r@"'9^åN mϣ&mi+ul._.ʝJɳźQ^OO0(h=5$67q (^qMϕXh7\$CC+Ԝ]FBʩ̪7/N- 6?Zʳ =F̞]"v`uLj:_2Cwty9S593o\nrU󾠬,Lre"tz=E&(aގtP:>G+4reO77tmq_]KWIjsތ//R2AYv?i?٥w~yh\J-+Qu1qHS+y[<N§t3T`*nkxj.ͯ.STt0ݽV~TIRh>ґTR]jؐf>jp ]5=x=R3It -f@t;3\Ziʍ!Е6Ҳ!;6jp`37^:] Fj{&DW ]5 V utP*J_#]\-kp`AuJAk+g ~*]l0tUCġPDW i!CW UC~!P"$Еz`ӫcvQp5/KWC+_"8]GЕ"ڵ3BEߑ#O=Xx/=RlH`_P0.ȵv;AY_Y' t>ڵjNB)둨eв;U+ZT+\N%滨h=R"@L03 QB'wop(lhXk0tGjc{}:9OnޤDvGO(SÑ p&mtnn('n~7,5Y߽Mo__N\߻|!/߆ЯNh1zw_W?[}3Ȝ2˺S-{ݢj 2Ņfx2jMOON\!j +myeCRqUԘp޶OP)IM_hVqcqiky(Ʋlt&' 'EazeuFi_N .ׂڦm+wtceˏf .鏰icsg{_LH,_O 7vh|ou?Jd: .kgRީ]_mtGݢ!\Z]?AU%,*tY$[-m5夃f,Iv}78 } wm&uC,Yg^Tju:&Iv,skk.m\m*0!1SgyrU\ߘN:BMgo]ý8C/u/,#Y^{ ՎHKƒAs%L%kOK ]3z"f{ox!Z OD}=t^.Ɠp{ Ei.y*fF~vqy^%l(=\R,U?m]m?1n;zƏ^nz41SΓ>Oo2\9qɽwI{G8٘g":5$֩k_4z;' m %o<|o z"=yOCYbվ%b:a8fJϋ>\ЧkZU-wVּUYm8ۤ >Kc~V6,iz}{et>m@=+-Ƅ\zp<6*>|_7Ѱ~q輵v@ɇ &)bۊܞ$-h1wZ19@bȞonn?:x~ط$oq{Vf;9$HIZ=BzC+.Beދ$^Hi#7^/'A9*ɻ>twDS.쒋Nx'=0ͯKz2ڒ˰dVUN ]*!4#\;{wrJ9xqI[U(z,v_=l}5tz)wz~8gC~NeΕhB&L/\tHqgq9Am乮 87TSpXx.?{FŸ`gݏꗁEm %dꮶӔ=!)K0մFM4gjzU] EbCBU%9%ArUvܜ{uc;l޲cWYRxQ-JϦijb\b赜*~5r jc;Ž)X1eFW#2>.9ys۞!ƃ|PZ"er}1A>Q'-/,V |pҐ|p`+)ߊ]2?^;L{"ċiZv/=OU{Fy;6D OܳU<&AKd93dXТW$NL(8IFF\.VyT=;UϨy8ø>@ T&V3dVG>VZ  ѦMj\#<4|^1xĚKrsr"R< Sؒ/#Ў56BH>5i6(ͤ~e{#ޅ;dX8T0tOE-)&"J fǗ2JiT㟮O#&RVgecHgTASd" tV%YGYr9K75^խj;8<>%ί~{^`^x29L.6],N7{_qC:_w Gn_4pY~s+_p:?ꣁ,[zvh39?^+"~(Pt3lGhxћ>EԨ85Zw~;J[ZmכV%52~\uqY?_2}հu_wڲx_﮽edy/z}M~{ /!VjI!fnۇCm4MT式>H*;Ě_&;]=Ii ^u‡G' j;<Taj` 
]#?|n,y$mt׼.LS}=dtqf6=mL[_͗ywt{ +Ag&)ͧNf8;&h30{߄^ u[/KTSu4ܰ'N[-ùS0zBNܮDz,x9&w܎=N(NEP2 t`@MF)sCT;x1nGqL/㾼FP ҿ3>}GM12 Td#&x8 JvTzM^T^wKZ]_ E_38E-SS!.zƥJ}2QB OT(XE|3>r_:6zzfv 99ICR(];Б>h49 -p*DK\J2xka.J 3J%)na ]Q^G) ^u>X \dɴĠc0h֝䛑sd^= uOjb kuXE&ZIxȕEg6dQJal a}SA%.g,%$2kldLB⪋{MFO9^N݂a)-~J\ Wb6nWW]dvae\;Y8 JܲKpS|x"Ȳ/ V$1d-c]Q:@T(tY/QJ.VUSh+9DQ[sR_"r$ԼlQ2jsX3*ta38UBc].<{$M "n~7yw˴)/tq6|7.W_!\fR,I/y ,BY++߹`e9-.PЈJ6uZNl _`K"󭻩"yebEk7SiͨG7)%;HB%g!S^Ȣc hѡK2Pma1Z_[Ȍ %495xT| YG񁈹,b9{~eQP4b38U#юqԈ{)9 X[d3k#Dq 7 CɀX'LR fLȒNO-iNG'!1X#6#爆įY/.Cu6SoݨG `9C+yvQL̞pڊdנ#uTE{"zzq08wCc}OӇgPa։^y,s>bpDя6hh/%\.S ]\4w2Dcw ;,:uA 1Lvfp-W: A;B^J[n9-@ yX:Luv5vԑ*@\y,aY=iΕd %5-ZJ0&H.912EڔP9kHY:Oknuwz]4o;'hYUlJeP>:,ĬLB%c%#\5+F]eM٠/@XME6Q\Ub2ڮ-mWsj vWv6_8^6\1kZH|Qbvöŀ 2Vizda FZm)CZK7FALœ#2}S*$JPX$ShSF. %y/L<($@63=jmfR&8{"kFΑn!()Ñö:U ERwwQR y! .PI>l]6$&Bы\l9aXk8&KLR:B70.|H wdd"sPEr3F|?O Iƴ5]VVrXdl7ga[ HsmD~O!k_CP]hz8Fw% 7M_d1M2y ٺئga /{!߽q]x*|tϵE3?q*^j\Q V,=elV3e.}I*gxwe/^=/8L,>P"Ru#XBouΉif /ýnvyWևoM9s㲽EmN)0b\+W|ɟ~q^2'^mwɈz턹qW]F^^L6_WSv z+4-MtAMZtլ4è)2!vE@gKޫhh,.ͱns;b7tu1"aWYsewB[=~iy9U0פR&}h$?% ![ R6eR=G?={tmM>$F㺭*hp6sg~;6Or͒]~nkf~n#Pg{|ض;EV؎@f}oeK_etpR:Z,@s> 5sP-bA4ZpNk`sj`,9)7/G=H7zs(kDYQ~A=Е:-躽؎H뉐ِvjU9l -:M'ٙ^3aDՉMax(u+\e`SnxlȨC N""@EPcU%P4A FYBеnL#&&](D1]q!@a $%D(D?Y'tiHՌá А{شl#}Mv<.r( (D@?/#y" \mfn#wo~c]@;B'Gp#1tz@4W༞PYyVӣiu=^)14T XvC~m3ZqQ$䠬չ$D.:DR}-pv`+m(q J`YZ+YHMRfo2d8厍"oH: u+i$NjMNg,g&mer&fVeNhLQXqUGW hqf7‰-{ے{n[/iY~\ިd%Ur'QcM* FHie"H9M);1ݦGa2=k E6|$9L'X͍f&Q"фj-^D;h)e]켁cc(!9P*v<8TdqBi1fI`k&?O' ^": w|>ο~2}o?|5J? Le@rg}g,Zכ-5lm>׭PuVu+c"" ϊo~鍧] g m^HM{Y0 d~= _OQAO2**YH~P$TC5S|g}Z6}RWn#2'9@r&*/!]x'J/2L Elv6X&žGs.l6RKSC*8k9Lk@2hpvw:U~k8Dose^lzZEL7$RhWuK!.&mtETu @D颶 @8e]CM3&X^{ $w&fEpFI"$jDJi,mUOzK/ 58Go|^JVRk.WE! 
o<*;腴# Oճ`/~-xI %Uܟ&!@t xQ NǨ̅ȄU]孚z/wm\YOiJW"XԮx__/t ǑZCYdQRn咞ώQA:^{ *w3dv*۽4|i]W_nH$&^2{n*nGwUF`eٲ%Gt>ckOG8uiΧ`楑a4ot}bϛ<2 l}G&[LgɮeQsM?VO;-<jrfo%9۸β󴽔MQPsO6.Yqj~\~aqAĩৄhsOd:4 ] oDIYIv,F(zߌET֧Ubl ~ce\/":v}Џ_z~0uEXω\ <*/OMznv``\p~S۔x]<|3#ۛ s8M7vbOv9?$kbZRU*̡Jx 9!RɃ@ BH.HGX:5W;71t爛EG'x_|[orYeYR:ѹp9HSg*Se1>" |6t9ڄ$G=`R l8??+&&C^P)QpdEEoBpJMIKǙ-%%Lߢ+ոzWSP-JP %FjKI>-?|IcVR$Es;%cDԄJJǍt"48V֡M=ݎx[!:5 KW=YCsթ+VBg_&yV~:/QFuoC Nd4ֈ9ᏔZ;i-ͪ,WZƎ[N:{oKƟP^;焍[ɝ$#IIH{1X\idJK+VGUW \{`%uMdHfEy:QEδ?Ά NFH_H'Qᶫ>^@x\uBkPڛt̒vJINJa;-J#-C6 aTY6 GF*{)jg jJc,o53J][$m[q ^g}tk_N^gK_jbZٮ,˗, H .`\gpAEV}A ._EaC$`s{Rۋ7s4_qN_FP5wi5OMQu}\1p27ث͍Ȳįʡ.mwF ::5ڂWNDOS %0" $D6q'k4' :j$4Ͻ,ZDDnŕV0B%!%Dpb$\!'PD1"Vd,!Q}kTT#z_hM6 =g)Em[v=8Fْ\z8>~l|8|6M#]k/ڈ@ 0nd2b!x\*T:;*N;PE}5~7LeډֿyS\N޵-"䯶X:mn{nEn<ĺ%]Q7YwRdzD>Ab+$E<A'EܸzEH7{t9ajY`ht6nX6r۩}Y߬Dדt| 7t#rC?V<$E;|_hwؖt|[>F(27͌ +".`Z5)+:@{Ӷo6[8`<j[8@eSOxU\L~[䴱գv]q]K-M? 7H^vv!gq+1zML]~b&R7ԭ!; ɐK*85[D7o{mՎv=wd[ZMcn3f7x+*w~t:a69Fv{md($õFꡦczk[Ƕ{k0>\^Cm*moi|9Q5l6Eϧmq5T{6΂3n78fxfoCzLtf;S_d=}n4ßZmcz~RΟh!1zqu˽_kem:koJh>M.k~ِ*{$(ráD9*z6D$zŦhsHl>]Lt,. 7Le]f7{?" 
F![?䥯4\ s˽Yߒ&3 0&BΧ\.8xb{Hes)RNZ$parz3[~W{W?i̫q?\$uTKJiʷcajޯNLBQܬAQa8zI眷9ݝ`gq{&;$ hR|uGguО10ΘFݲS}qU`s/杲m&Dl !M!1*):D DBq!ĸ531kTgEy?ZY"A eC;n@# s#]tqeV8HXD@x8Pk%~KQU(́!jX,g~ŢxVt]Y }_kUz|8Jq ~=ԯGk1X'w?oU.ܸq!u5@'=\ q?DYA+}'=NOأh)FZ'HHdPA XTis&r.ܱuQb +ŨB@n]Չ),wVz=So1rߩ(бh\)3MNQ Z2:YC T9TLz|xYo *؅XE H6LgIhQHt#2z 9>a)7ύgK-'OK,f6Uҽ||߰"O.7YK{O[sƃW%ұV&IVz9@ԉq+GAA%:9]QՌc8ez W:M|=*HKJkb׌J1]X3Յ..RlJ3.> 'bNYDSi]C7zh30wW4ܲzWF7V؏UFLboī@.ʯ5@Z)!B%@!BWVPK&I RYȊGة wYYRQ M 8ROсt.&vLHBGi?ϩJG_e3o'=lz^)d8RQ:Od@aMB~b$-j[V;jUTtkUȣJ[zuN2CMw-wڨ-s[EnxX\=*URqzJ9gԐjl{LsK1 }KjJƍ%# ="z)bJFW(5d3&hWem4DJR0Hp9=1BIt-Rs@eN94΍0hd.46U[VlDeKݝR4Dp@ShMfI"hNUR!lT|Ҡ@L@" £i<T&FFy z90,9%|$EbQDb);)D\g"% DfOUT:=`/aoC+5< cn"i6EȻ$#[W΋ʣa2v>2ƕ -&C~ qrG+*cT&xbB,{;#C=}Abv ÁTpđ{;%z5 WUc |6N O"Z*ÜW߇ްq(='sͻ&x܌W1n; T汢q=߽k=2y7Lex{N6ơlm72<.57skf p>-`3fbJ{HX YCne^VG7ř@\ix;W4 3/nʆWX=A[UY,瓺UC嫿 ڴe;zP ػur*O?+mD¢-ZV"{%MJ8*brK10!t}n߽WN^p7>1}&_ (E@+`}P"oz6rz:޽GSb$`*땩$ xQ@GAld-J; N :<^d==v"o{:^KmΨKW֗Q5s &eJ,LʈE{nGD)xPL,k'gA>>W.@ۻ0pKm-A,q֮"J+d$iK8-=gg%–xNƣk R r ͵N6srEe`@ʲeyқ2Z A!DrRh`=o iR^89/W*31w5ph_F )%lԯ=팪v(oL喾e(~QIlL; .J1,gO"rmoA)r@I/$ 3sPbR1*啠)Q -$Z)A$PͤM Zp$9g $w۴֖̀pKb)Nb(g\}<'1c{ta<}Qqt_H&c64O8TN;p"hp8-LtV*mRp{()z";IhHvVGH,&rjIG gIZŠVX{?b;dxy.[-Hd指y-kr-f;7]h*$aͬR3b!m N@rd =VX&{rf.s4^N` 1Mz E,`' 6!Sۙ2܍=!Cr^CPPC֐ŵr?Z}(^NC\p꘴I޺Tv a댾|Bm1Y#ېK6HȤdC(= mNd@3 X`QVP=~~kң$u؊{G V2"ZS!H@ fE33+k%BJ496\cY@X#JIJzAE"qky̺Wftt|2_[akaƓ/m➦Kde$:ʛG:k#ۇ,oicOVbBOcF&GQlmƠQ%<-y ,imrJחTMrU}؃j2H7׵NthO6l4?8:?X}_W?9n^ћ^,R\L?Da W m[- n1U]zwr-^g6QX܆8~jM稆HA[kI7QQq=U\~&6?CF*,hTYFsEU8baq@LWH~y2)-⳹ŭ;q|Hw$rS%'/TywTR)PAzTf.GQ:G<=ܑay09ny i19s^rJZe ]8N'>^Ew'(xv8W(Wժ!i H/ji|#mr F:Փ wپb|4nϷ@ PF*㚲+ޒ@Y~";=vd817TTϯ[z1f+3|>(< OTd6oJH/85Ȋ4!ݚHo{ԿP0QGyсgBPƘHh^p\ j w PW7{k`tC,YL@,pED#%sJN6ZS))a) | { ,F3"epRc3r:JsE-w6 t=HwD] $gfkliS`cn8H |#? 
sM^ <47T:o0IxA t\ibd6=T#AQ)i4m݉ 2+D{SE)/&4R}%4,J)DDq$Ɛ>ʢl,S cL3dڍKG"&Z@,ŏ>cq#p?)jgY ۾ Ԥxb%3> `\ΟL9ZPA,~4Ѻ?p%LwF`[ m3D+381N>{*+̞4!ː [dt !ojv pQnrwˊ1N|&d"Q$C$ E+@WqoY<[LRƇ|?Kc0& ='5NFYğ?an%Z{5u n'D+kn#G흡p:Zv5 @IS߾"%xX(.Kt;,XH!Lyɢf%b1uN&ӜT(i PH t놊UB%*xryRtwF.\0cA97#\c}G*^38'/^uRG! rrюD%qR麷NO TPR;=]t$D IpHC;C)ä?{k 0+ 4<1JY+X&`4R!!#3fhf^ uR"UN>9/LN"F-g!ⅰ ΅I*I+vRHBg[OlG[As%JG? DWɾH}G/fFo(IVT gH38B=5#n 82az&B՗?Ąː=X]|1r׹SLyϘ6[D7WdJz,FjG:wdİZ.)90 *y0X)0$` &0ÞPX"ZalLH6RjjDMfxcvsB-`h¹s! 0V2!1Sa)%' 0clnO*mS8o4s &1%h kȚ 1hzDŽ0l3A1&;ɹ 0錌Tf`y5!y5qA H(gp̋w(pz bSsDLR.=E9fjʬ1H0|4p] V;S W{r׻U?wGìJFEӊv/c{)xt܌7Rʬh|vwc[iyp*+ۼ?:oGay bQ˽9&`09KR@iՃ!N!(NJIJH! V4M ʛ95 E-HpQM;njۭh]{uKD["`9([Jw^ wvu-:;p~ab0-AHoW O7~|,6߈$} P־hנo&˛" e87x>!|@_;TW .*^z~^c*b/?> 艞aYA"˷IT񝢌t8=\JLT ]'5kYN~=3F;o_|{V1)jVKPç#~mZϥ{HPy9]zoU ш=l4b]RsijV>18J{Kjښ=*R6rA~6AB rtiM4j6v (h6rUkߜ N.7}6x7Y]x _dO4bUͲģwbVDKV rnb| ԋDX=~ ֭~1@%atG8ՄɞM [^ARn?׺ɂsEsˊ(JFļ=er18bWRm㉪UW=1w6{eF=Z$cR>*ݳqڞ׃U ZڨfqW6E=r@K7š9݁78!%PS/b*Zн\oF mHHڞիU[%^Z=Iiz$Fhk>I9alE5:=I*(iaR1V@]ʟsrJjּ#Um|sGJ<7ˮeA4uF׃Ň1 2oj.:R/o|27,C(枯[?J)Sx1:G|)AtH1ίtlgS(hz7 ޅQ~>Q! '+0Fer:>iT="}0rBӨd'J~ҨT'J9ܨts˱1M*Ds+G6gST`[mP9 ^,ѱ\[?6/F\=DDFtt[Q0EDFttGQ5Sk,wKH%z`.8c.U{]*!j[=igjݔrQ=rHFl꣪ՀFYi5ء1H=bLO[v?J QݙcJ#aUВ#pl\4dsMmB#ʋ RJ5%:ӓC5)Y_I(V߼Ko&rN С[C7Vf\#]N3leHM:A-h.i:ڇ -BpR}RmZ!]:O1ՓS0*ZY{B(=GdmC&^aZZB)굑 0wS eİ}\O/l=f m<{^4I訑@Oɮ\:ƵX4]ȧ}G3Xp5{T kk,A!)"㌧uӇ*lRuC6X1e"p5XVC^9EFIu}n:NjU[w{:(%hں%w4ֺWNNIzuKeN/1ֻy*o rΚlB^,b^cc*BanrU 9vlV,Н,!SJ*IM(BJ3F`y -Lfq[9nsKgWL 稳SeR.yunҙuEbwjU*NJ0E yMWIv!B :mTnzPZnnu0S4S(zM^XTP'*֭Uc榭[rGc[ yMT鬸ZEJsW;&PkXlt}fϴT<ιe9X# FFyi9Zi /XZ(<^jhݖZRQ.Zu4FF=`JK!ǵ$߹k!I3ɮuQ{: HlR\Kg &w4:r&q{ Ä `RuC6XWͦ[rGc[ yMԱ[rr㥦X0{m2%b=§Q&wD"t{DyD:))ItJ-qԜKud,;n4 N[2%su(*=, _d;qȏxR }$$a6S/` uaQ+\Ib4'FfHªH`&Aj-8g*Ur%QQ6*t7²/=uNK yM3ɎuӨɩ::OlRHwށ֘h))zqnY7MATeb"};[EӲFGWN-NHN3%8/.=EQ,WmAr q>oDF R#{)v5ffQ5G]duBRLhԡr@R*Zr_8 -WwajU*{8HZHpq»0]_TX*Zx-@i { )x IdƫVH(x%­wb,=pk)8õҖPW#VK.I@7I=K RS'Jj l l2 ]#7œ.u%It,Ҳ%7]o N2EZ)?>_I^sOݜ8S~|g_PwPϏ'o`8Y/h,4,\Ԟfa/.MS?z? 
2#c_,Fww'e:ʇf| sl<;ޝse2L0y$`:FFnto ֿb!f [ =7w矃-?x_@8ǚ)B;ܮJ\ξA(M Kl~wS;}Wձ3˛G3?MȟǼ|vJid(ٻFn$Wzp̖PcŻ )T{(^EQ*:&U L$OBs0->E|N m/㫶>E|N m-?E|N ͉h)X!Zo!;p"(#μ{kpCeq ^r` MDy+Iq?&͹@*WÅwApeR~}BQ"C_[ܤpPT,PZVQv[ij|`LJE.Vzۛ q-qs?3$  /Ŭ:"EUccfl`X9΋CU* oyK/9Zn[v;Q/!qɑTX5jW4=sZYy[ #9)JTQ=&:^.xƙnWҚ pu`ʋ\%Db,^V[LbLlǿ>RK+MDO}ca;;PuZN Xi뺺5oH CTDK+%јu[Ɣ(NGǒާT,鲖keo^!Q<' Lof闾o¬TF- zuRI^WkUa#XZİ u@ii [0 &429 ٢ܴFܴ 鐙,,je^("J J)6bHaP(b_A{`=&.3|T׀aS)l!zWxK>Q%AIApN;*ȅ4:D.b R6Sy@XEz(65PT(T Oߕ6Ma}pA'ad+M:M'~pAP)vdQt Hhp p41K@("s!q_HQ[p"~acyl!͑&1KXv&"啵^Pi>"ĺHJK8(\3X"zs7 h$l* o H`cGb#8 YV0 *>H,m;,Z紽wѺl!Ld,X{,ڻabwe1jG]rI<>Ku Wd|'H$]W|WƜ' Zyb+ӧ-7zf:r%qY6.o4Bϟ6RJ~OJ"uػ^EHLIQ Km<ƅ"Âu*Uer;+li{?[ڏ(T= ~4XW."e1jy,W6_䦁czWuy#:`w'T-z1sx EePnDgD#I4S ր$R %ؤ>#O bUF6(c Р ~9,R>5jcQ*eZNJ1VCZ`tIDJ:V u/&_'PIΖgDOL5+T߾[t%R1A _d'3ZRPdRaԿ 03h2\TOX%gOs|07IȔ=zww*JL_3hNtr9ιa0BSm%n Gb9| :Er6;]~sl 2 (y,k[:bɜ[dNFj /L T qsk)f1W&0 *]45G8<}ڮiGN5pif3gUrK9`A^+9{$FnwG 6EI`XU1|B3Zh;+ڍe~Dvg /G8U&h f2j5C*JEYSM{f6Q΂oH ʘt;b@]zP\ 'AqYu"lDu$I;݁uInllf1)"1|4ZF0U9|YZ ^ɬYjH&X..o4۳b\AVL(>_g1]qdh0BxDB~nzw3 i=6>j./Q0L"5H閻 w9=btyqA_d< kO,wBDb(x鴢t:+P}=| ,Va`_ZZkG 1%\oyxwv4fOݙ+.s^7z^#PϷ jZgMP$aHE4C`δ-vOZATgnn*&\wqsc+?38}}W 0|hعDU*%8W]B,|Ѱ7>} ׿.^SΐnNɗeX`Mp&#hn^Jզ P4"|#?Kԟ, t$ra0Hh'wybNRI4c lX%yN`5KTJ;?uۚQŰ8-PD0:y|z4jYnJGuڔU־J+@˗9pg)[l|fi[Ihu#L|v{=OZ_c_?Nڛ*hl䉠ԸItE ɵ<Ȯ}bԼ`n߭,@񕺳]IO\Sބi#!o\DzG2VO~VB)DKuS^KUc+P%Ջu%Ma4=)_چ}W;MXEIp7; >|/dҿށOwh@h'>TRdi% ݃E-̮ L)tH*ͷ~WܸX$55Gk5@2rn<s76?/7 \ ~q'F3d|의J[ǘNڄcKyxF Nc$F9 &[}[J,"J .Ůz1&`},̅ݑi0o1+߃|CLXBm,n|Sl[bͩXs3ondӾ|#{2Rtw $4gInZdF^]v(Z/>d $âwG|]Lsҝ/n+@LuTB<ņj$fJSLx*cdP5V6N!bމ9_Ah(QKj-EF; (Hb E,8N{CI4'Đ©c8H*X&F)= " s$Bc3* `$ P8 b$-&^Q 1qt%52"VyKL7H%%KUyLsA(ӏc甀^C$5Wc)sl`3ᆄ`ԣ:(3eRH g6ߖn`kg%ݛg3ż담ub3&%R'UK3I0AU1}WP1u;T C0"hOY}&҂$AS-" e)*R2^#G<Ge㜸.chFPo\ n 1r[PjC#YuS 3,bƜSPZPx5&:n1Ect'. "u5Ø8#kߏG6|u+:K 3WgGPӟ>WMAo;Ra^AM֗jaH}+1tRJ0܆>Ѩ]5BpP7*nփ%jgʓ'l g+ $SOv? 
3 w!VߖsnnOiFgES`NEv_`Qs ƵF\ mm*8䑵!]Knsi)T;]PcxTP@NӷmYjJIN *+)lAv[ B!-qWp@p۷ lHpf\p3%_B;[%p;bSd gLhූlҶ6!Zf=:>"2L%8=ā }x3~9Ƥ$0$IWVc4nybב RTM~6`><'|n}Cd'IFWgO7ʟPDžS7̏d:R&Ӡ~tVa\H\P^NTZwqg aI^xF'+埶?3OT)\{rAeo!A4l9ZFg߄isZ]t{#fЛ h?}8ćI(F# -BXrX#D at鰺BK 0#|nmy%kyA34* TXu$%Ie+ |YeՆr^1{HLvM"aMGa,7?e @^-Z 0 _&v4ܟ@a]! F Q6;BVuڰ}WK^CYYUy5OL1CLMqCRZ vNlAQ#-AGɟ 55?#?iW q'[tFs+BnBjB#a ġ](bk"qc ,HHzaRJk'iFxQ8!veS ̼}G;5uΑfAC"S1(:'4O%6x+e3@0q^8\^ĘJ"Hq]^^@++?W>@ _uqGNz @' &^wg#&>A 䑓ǻ!@WGoN ǺQ@I/&«+FzmK畸ݦ>5($`gdL{-ˌ 15hRw{n߾b1ݿ;@|]9HmRB.Vq#% #tO!qAlSOPSȓr` D!Yu\5ATp=͕"͈?c9*Q&<`!S7!վ`3P%!4cqEp*-gD* m* &$WTC?CYOQy?Bl yYHxkfg@+i!'A?Q@g›--C.:J#/VZn-Hˠ&ȐrIAZˀ,\r3$ŬM޿:{~z7Ϝb>9 ):xdI"tCGd"<> ^x~vx'e&To]7^E/?V{ 2^?,yJf9Cr VIQ"`0 lJ0ڻpY跖ǧu{mg8is=ɖFQc9}_Mgdh8Kn#WiF ^a$F D%&0o̾'n>RwߑN2k8)M5tPhȭvӣaME4ŴRzf2&h;v8t:?GCfC.2?AlM~s|rs9G~Mg{q(]m2o1rj0RHؓ8[I/ug..'s91ywp;B/VSB@6%ذL1e x| V'_r{uI#'3arدm#R(pQ޸z_ZcXuN(Lqv_ R9j9twwz!ο炐83ISCK:c'ñjmeNAG3:0Mdp !q XF2 P(*311a}r |Lk3ʄ )wZ 89k,'`HQXQhߎ6xɅy]0oxoQq]>˖΃á]o&4K,su`wG3&!#_Җᛥ|ekx1y! 1 @H(B'sBF-jWBG•Ɓʏw3CP/*([-wX@M6'p^_O[onܾ/"3%dSrM #hZB1SF*O *M&az^/~#(Nj~rjyUbZr^7!o|M8LFilL9^h^>Ĝc sL,Grͬ}zܺo/6[Pn+|=|+=O۵. z?凗e3_˜Ї/0 ކc[w?C47 6p?M~ݍWۗ6EX^/~?qBg`ɡ_X7;Z 8S`K,QukZ~Ȑ)n&;:X˜ pM )C3<>y3slQJNj ;dTgY9Gݥ@آWͨG? 
(s8k1@)jeP "Ѳ¸G!37)Bb{o3=y1?7WUbnwuA^Ry)6)NuA OBfdQp`T h 9}&1R9֛TaRko  ׍J[]^NB&o3@R?ݜtox(oqNu lJ)d׍g 6 V`%T g*+wj+5jAc!DF;HqRrOi`&0fiYC]<8q!rP;C SKwm02% FzSR+Bre޽[/fU6!Ɉp,\S0LK!Yf NLz/o$h,)T-Ce@9D(Ȋp+eoD9i4RI;UeT r$3G2/V ^0@:Ƃ<*\cT~sꪽ3Y@H 7Vj9⤟P3 #a/L,NZι(7SrBƆO: Wϕm"0Q-Wk$݄$y}V[mnPxpoO3MJ1-h8RI\j.kUSw7R0r8ٌBe!7>?&~5y#ʄW,4'n#3R9Hs6h^}m5{>t((D^|/?ɓ3d~mQ{4A$6VqN,ZC(D)ٜ7A>xSI_4^_4?wو+qf8T886g8"|G9 aS?>kD C=뗏&1bs2яUP0 ;$Ѫg I".}n!ʭ~ ȩod砘7kQ"1Zywk!tt 36$e'6$[y@m1){YWFNS1ߘoźVJ T72X&2v,Wg_;/GT#HZOb>=U6Q|EäČ vg?Ar9cGubZ-N-?|^Ǹ&nmGg‘bmo\?&XlݓJFɶۿ'_s#DίBubᓯx˂cܮp_>-6|Q+=ku–\P^8x:1:Mb_ׁ`d89?g.H/E0i;*ܾW7L@I^G 6SOmܻ .HEkѮALek Nujf+SrVTs:-nL, [.=ك͈mmEF{5yvȫA֮ lC}W=hq7kӢ?}0o"_݅oN41+b=_uCzKΊmz_{ߚ>Zfq^7)ԏfz/z9nq=N/2>Ix%m Z ٱ/N*|[guV6weu;p50uAZɹ}z[!O?R tXs1t9Y}jYD%]XSh[0)N1r Uwz;KL{RMtR?HW E=WUAǬ>ώ<*>{J)( ]~a/*URlj8~u䝈ҡuEGyp+8J&~}\rUki]#;ЎڗeU2Hnj$Kg_U2I:/ Ď;)ľJNXjpyyc7%m\5Gc˦Jusr|HFD1RMs*Д4 bl(UC:z.3EKQdNmЋ#cmsWIg=BճCB]!' ~!mcҏͷ=ύ?ߛfTtU0piV(=T0Pfȩ]dcZ*!ݤhNv2荈C1ӳ>|f g<"йmZ#rrݻuur.,Jq~RC>áPCF!X$̩wK)yE;}0MW):^`hKp+۵òtqbsǂns Ib.1@s"ҝY܍stQI(=ARcC8zAQx }KIr*O#%FdNs.a|qOnc6>ns)x[?#QQOBU>f`s )2RRT>$[4_|ֶX&VkD!~c:xD6˜b R[\lG)rRx=^䲍÷Q*|1g]ܬoEy+qHЗpa`b5q XFRt[MAԔ(EDLD쪧nҍMs;VmD1"݉uETn۝f $ѕƛj<[o;²Z:VN&ŌMkSsK6ݡ_ȍᐲ 8n_Ƒ 1bucŃo6 D 殂wo щW~ơxN[YC wE3c6J6wJH+Xq(t3ln4 5t:Ɯ#MGGπ?[^p@*R\ â ozhױSzEۀuZ+!jcEY"fXI R;?_Q0)h0ob"~/>:ǽ>L pUO{G v^' +^,}z%0KckX:]:@y.`>JU9,Wն%rTiQ?./d߽/85`yCOkvxMEVaAU0^ؠjŬpiZjDKd/ _W1J@ ะ7RWz}jׯn<\{'fnrrt+ ?4ԾtW'~IA~TR34_r:mp#EcJl0-B6icfxq.ǟ]u n:} @APx@<`i-g΂R1'{B"֥nܓ{1KV}[7U+*mv`mܲw܃!(g y0!"\Dg5HjMuM'ϵxm0,vܚ0t4 ( ixDT{WxsÖuG']Wn^ ͋ݢ+,6/6Z^Z]h7/Z^8ؼX1y gQX#OBLp|?c4\r܂W\-oԶ+1dw|ʁ)ͫuHgm!3?<"dXFbA )'C!A 䱕Uhp}-XMk!9+ju!1!YFaD§e!PƅN a9Cpi2^pITsF)]T+*NH4ӏa>:s-e 2!M.mQS4F9NCbø0%w\{T Ef)jԭ5y<_n1'?rfԅ㞙 JA̎RvOS\viUZoo>L$ciWUT LNC.\h&o]4Lqr݇'(+DZԛpcnS'ͮϫrSlZAp/fU`J3,1Lt4?wҿDNqޥ $HC <|Py9f, y0 |kkN'OHkvS Jyk5SWQMkRM,2FuZY9tsGl!& sE*i;PXCM:M\N!H r={ 8߲;%d|ߋy.|7 n)kosi-nGeohy F{g_pLAM E8ߒ-* K v좫zR[W9j DŽRk&EZ_rT+םr5]էH"t^D |K/) a;͊r'ǽ-gƱCv 
=3x1h|=k1%cp{ydBK#C/8K`6oOʴ~Ӳht?)%t.](gKkFC\EtJ(̬ݨz[*uT'uUe@i] yߚޭ 9pM)n4WnH["c+nۓ2T_7wk@C\EStb[uEAIcE>Dּ[ޭ 9pM)-wa6[*uT'uUeLUmȻҫݚАW$Rb7ZVn 4Z1qr&*=nր&`;:4wKĠ$wkVEnϻ%ﴯАW$"z[z7YŴ1:ckj=|F]րt{ x7YݬآwKŠꤎwk_dJ~ZNMք&;߃л)JvbPGuRXŻW2s dC-<ޭ 9pMҩ]9ylkohR-zF-]~@u]lR;/jP[V6j jݘ^Wk@s]{HfLz=[Q*ofٓ0H򮿧i0^^/P̈́zA^F-#oLJˮoh4vvF)՘s@8޻3P8OjPcWcƈFsWcn(Y՘jqWcj̍Z{Wc,sWcnxǰy_uWcj+XbUWsWcn3՘sa25fΕj]QKxk\K՘s кkGm֘1w5F-;Rj]QKv[YcDɮ՘I)ݿ[|3FWc>3՘sWcV՘FWcVwsѺ|mƬ)S՘5f(j]QKPB]ìKw5ܠ%NIw5f՘՘aʌhWcjMZnfj]^GQ x ܸ b<>-A~ t Y/h"1%\$h0*i;&hW:M\NI󛇾#ǁ/$W[5Yﬗ~%$~*7An YaTϡ1n|=4cTȮ b>6GnuM[ 3wW#x.O(ԀǞ39ok!..ҌE(6aD0W{rwH_ƸB(^27O]ʚE2\۠Wo߿Xbf2['(Һ&0  +J`[G?nUYrӫ*G𿃿}#i%[:ǒ)9Ɔe:v L2gLKWk@Fp8pr0~ ng 頸xsׁ~\w9ȣ,@ xin=9+-QBSY?xy)J^đ؂(aQ|f ܿXzT DqA+V_4!BdGl*6+.#P_r(Ac_l CrPfŋz39i.N$ɄCr&~8U0}?/Fo khytSA+dikЫ[>|?y9G9)ќ“xma +)TAXV9L)'\/SQx٫7I1X,M0+l0awwA7j fie%_ *)X (Se}SNrFL+] PYHI_LJU-?uƼ  Y)yb43]dft}68LS~=5!q~WG/eK<{segg" V0ޚ&q[بdf`4ŪLgr?̴rݯ0|?TRh9F[c X—ĕ(w/O4[ ߽GP &wy}e\RU婷e;W{t6[|sbj,t ryGy/ HSj0_1teLXZNXm L[tYld%\Ōغ(GiAvRcHÄd/jPIdkU}\'faPЊpك.>0+@LjADF i5tSezwۓ ]ezvo4截rey2n讕v&E&rʓ30\yGs" 7+5K ibF^aQROt)4$ĚDž/+YQ ,iiWr XƆ،=s_IUZ`-.5Tq+F̰{΍w#! `!&`g~\\NsDD0اdC1]0J #rLP\,m!m( lzeZT {k3/XXN\PA} y,[-f(̀q IHПa g!JiSghYX Mô Z$BZ"Fg@ Y%dXywb&uJ][o+^ٳ=",^ axHy9-Gȹ,߷ؒidgF= ˚KHV~J1rt5'vw4_ZB#Ridd}r#GF YSN̈PL1F&jC7ȲyaK)҃?B!N\]pфZ3LMFC'QF,{IJꉦ @q͚>Ţ2<FahF2JK^ 9+ b Cؕ="D!D_Qхh>^^t5}QXD>mw~H/i |G3D)45P旃KRHxu>'5'>,x}B|Kr@ãΫ7'Mpp9D$//.f 2 ?pfUb~^E5]P@ b1ud:=8o~Ků H7O}|YL=/W8Ecs²a[g8 E)^8E 6+ҐhE&b}^oMe 6R,JZa49ctj]Ii%V75$nxJ6vEW(b1r jc _Oif Kjs׌^473HqJ5d뷝;!a HU+w{Ͷ5\ZUayݕ;kT10aHگ4(5=l4i3+sCٚnTA=5ڒj4sj^}TqáT1gL4NwcY%Zake 'ȲDJYEf)W#+ufXRسrBeXL&|ySpZcлH/3O+F;WS((g犽^װ4,66P!~U+wM1pb p:zuk@{UլnȪ.!jه;&x>J-猾4J}>%& )f9Z8.85h?13&T 3W5fV7^3.+f ) p^+]&B;9 ;c Mw` gd”vtD38C+E'{kzdVJfv ge<|^՘sá3^-? ^U1v/RjkV]6 `V}d\\mҽ z,Kr\} *9,HIT:*g)$B|2PL*~=@O̺^t&_6^;NsYzv41nyBnMH1Nu.+g2 ͹geG#p4dMHD;vEDH6WOZ:%%4[#pNmkH{jw[c)e-.婈tP6d:e.rw2ezN3sFryѺO-*~{n6DyHDZ` mMۂ@eTF+U[#}+O{Nŝ'bB )w9UKkx8],:cߵ77| o }]>? 
6f9|a!O{?@t}|s Txswk-?5ytvěCz7^9uLFdD} go0 "U?V as5ʈTa]475& ZKi:^Qoag>,nNHRGÈv-ЎfKnC:NZ8zv5Ȱ-=gO]HGI>If?Nk↛uOuL 2+)AÎfbXmd4 WBba@bHXuDsKћ™grʙ`V@rRt=!brߣwk4- }޽,m?}}ޗ茣ѫ*WMW`^^&5r->8 Y o\ (:AE"DkpX/u c^jȵ^f!ͽ]bɒ<Qd e\Ke  B[0j'yƌN%A/P$9,H5 hΓ`R'T-"dڨ) ZR[Md2a~wHz8;z81xhN]b1n1ΥHĥGu{(}_#vdzJX_]6iz~U!;V(20X34I˓ J晦7g\$ gA!cvYɠ/"#KbVKr>pC+/ m0'Ё^ЖH&. ^LJRov2Πm;o筣4owGN$k Q mH7=Z{n<7mffffP>(-Hk"EF>8-)M[ !1ZJvt4{~26Dau+yyy6/۹|(Z^0wA^8/CMlmad$:%G S% جx$A5p1km0. J=!THuf Dz? XrK/?rvą+zn^gxQT󜞞Ha3TFH,Ҙ]u|Z+hӣ.{δeR*#50aJ(~֊]Vqi8AZ(Cϫ{D&6$85cщ0Z.ze|(}KW~+ԨnfDR?x1.PZ/5 ɟv/RoZblpTmU!x.ri =p7Z~F%wcH"1<=dZXLeUiD5dH|X-4\TХI1D"YZ*P!haZKZ?\JYژc{ۛmG׃/mޓEŘ!S\vS`wP0YbWKkŎR;/&1Ov5Ap @Eʑd,:=!iDA,園Cl |EI>t繏?w8.7K}}{( $AHC83h8 :dBRYK33zÓd2S(p@a[]oreRo_״ƫ:fiY':OXN}\;Dd,-c2I4b=_JcIM#@k8Z@{XJ מbOYf(2G8:)vkhU# uN{Zn۽=Y$havEh ؘ2;?ϳؙ b\x\jKmqm<6v7)8:F9#2CSn NEZJM8j2tAeKgrk*QnqoF:+`2_ c ̊j~˄  F~91F2(ZII%&ҔɼGD ͧ[ _dDݛ2&Rrc'D;zM6&7GxTQGW?s / M)*62&8 gd6|&!aQѬnJLN&d`b=AɕDћ@Ouڍ<98Jgg4&4uiUji`_NtYPv}LMȄK 0~\.LQNwA("^K#Bh`0b^ PB"c<hhɘ<%3&h{EhIm%m;E(ɡ]/PLꍺTO^ǰ]Q,9 (%Qi-Pd2 b@j1  b-s` mט*:yu \h|˶-$Ckݲ7+ae"K! 6f) EJ D  ÚH,ӷ`#(gd抶]cI-`M}0p{lg((%T7 Xre2Ģ^@0n2*@$P2lj= ]w(Db(%2VS#Eθn ,l=b9bv(FRСJ`l,.Ll+\9 9st~k.yǠc%: ?cL RGCi M"Ҕ\qiEBǂb&<eK[[_]߅깻Em{=~/ݟQE'W?O`J*Bfi4WK v|N-HX߅zةcS8b}S{W%D[Z4:\"@ǭIנi`|%mU:ZWhНRGf#'sbnϽqc@O||P툳&8X83sx4thJR8x|9(![zG !َޒ+Gیah1<(EvF#w6f6MzayODp|ur*Y#Hr ɠ!Zv1JENZ-+x3.]IJ"-C~|2I8`n w' u+p,taR>Yi79<4x^)ۚ16 8X6 K AҪ{ $navcAA˫JpMtхWϰpq 3,- اe)1J: iEAI礏-H .~gP>!ݡ|7UY F()nnB|HR^|ĵ?+S$W¨ܕd_PZ?ugtoW:v _oBwXZi+TBrZYD(gkkY]b2$O97fN Izg跠ƃ>W=!+5#Fu'ߥНQKS(qDB=!e#J.)[2CwYގjAE)}#x'koU(b*Ƥz]q"}{+zl]9IihLd*n ͘T _b9 6ʼ&ٲH)b̖ b!c=޻g3}gjU"{PoFoTt&(S$*P9Z2lS$#`ԝɝ/s:.eNg҇wt.)3 $?TcB%3-/c1[~AO-ܽr!1 Ȅ:15W:b֞M͛GijKvWL}JDϞyzq}//^ G6'7!1؄XLλEV)!Zk]]9yk vs\{x):7fm- "OQ@|A ?3׽M|*щ)?5cչ8hЭ5Avp-Um'ڴja-ZkvdFI򽘶fA9!60/egc͜rcA*+2ye%sT&HQ l_cyKΖ9exƊ>t8Nў8jHz =5-ET^S$0nd`KoDGr`Ey ^[݄?+6F6F6F6t9@=GFuR5#{1Ya ƘQ#l#št,#A}S >C#l6_}aт-j)OX,,fC|1*lœopGb<4S֟5})+%Z[nPQ Jz? 
DFg%kzfMc5`)Dic+ JH jȦF9 ^=HDrKT}P;ޅ7 TT[32Ck"yj3J$>} +o &LE\r;ވ%3"pxrgY(mטH˷lFF '\+Ǡ؅~@ՠfz95Ѯv`BJ N1M×. cVµP3^WRY4y50>sosm䘭vQh-veaC `:ApKå4;-7Ɨ,RR9?xiҽ1u[ UYS5!p=5Վ961kly`RH!A4y~ VzDžeTՠP L6FZw]£ L,I!FX@c՞3ULvvvdZt*Q[VP,"+@41UN6ddfv Q5PP`tHȒp_=hC.w:dɒ2'd#CxXDgH 8AD 鱇ع󅰟HٯT i/넜=FdxrP,^T#̈́| QT=ΗhCO?+i?8go>\? ǐ*%#>y~~wJѭ}p'G6 _H`0^<{Z%e2kө$ac182QNBsw]i}sJɤĮiTPQAFeVP)J#QR RsVyx#93(M!)ĖN~0gk%2Su 03C=/Ъ{YQ}e.WyZ2 5afZsJE޻k}3YB{p4 B>׳[)舞{4ʒ +Hr J놡eju5 ]F,aNĆ"MGh==7ViL2Ɉghgo6y)3,>$?-CڙYi9f}7!xM .sq4!?Lu9]o ~yAMaPpV=ҁ<u=/Ṿc~b*@>&g05Tz=YO;\3jَȡ6}˟UkVulEWqS3`D6pHٌKDQŕ8xB* 8e:4COc:4CO䉦!":8-ms2TҚyզq&'ONT`gH@5,w! #$}#A礵t|GMfr>Z؟ 5̰uaQE bh|ñJd0`*e0F.Diy H`y P#X_㩼[M‹_"5&I"6h~[cSm#Fy!0u@k;}(`hoM^\Tv[ܘ!mF %(RBC\PDoזTUQAB1׮ [ =m"4FR`4+Jitii4rG ҢsdJL1uܵA Y hA8=6h&\:6Zq0>ֶ{l 'JAI)2>mpW7-61p*Z\i$d ī.rȣw7|K<يI= iD^"%ɖl?LP\C LEI3 ؋I/AF&A(f1)#22Q@a4ѝ)3c[W st {#,؇LwX`*f4{H@U'ufL8cfL@ǨbMrͺTDjnsiǞb#Tsö<S(vFdնe *+J.ڳ:Yϖ<0Sq&Ȕ*O=]PWx]6fCq[aRhɃ`V} hTwpl2"Y\`郲A`:ZІ):,+¥↪5`24XDRӊ?#wS8p@|\]L[#Wgfa,GdI!w88LPړO冗q:~sIȦ.8 NJ 6 '~ygꃇDphMZ8?'Ad2ĪջFV5@4%/ѝT"U:" Ic"MIjPrDEe!鉹GV[nsh )>5ΔNP̩%쭌́OQU"4HfIri{R|eR[_D˃_K<0hh`G::R,7׭xr-v0%u*ah!I +^.FQ~'qMON@|Yq[wv\Ǹ ShԒ0.Z@YI2) _̶juIKn+0"ѿBο:܏N\!.K,0H` φ!&( g>]>] f܈ߞRTG,¬t`cy֍%jtR1Jq_!؅ U[4!՟꫕׷/0Uv-W>(Weծ|Ub }ꝏgI;+6$]}iQ)N޴EԠt}+iY~rR\)0jzZƎ$< 㗃4vqy>w->IUw5ΠnraI%va O؎vnfj@(~[?FRQάcl2psw'W_tz'_l0 4KXz1']+DT]FPi[[":h\_ @Z ܥhz#1|rϧ]'{WТZ_4>|4ZyDvW@%=o9uj Ӱ廳%~[Cpl NOsfa8,l.=͗v %58BCq9dr`BG^)mhTR1Nݚ]y}vBn74AO"-T @3D$:JSLY,(r2uM ej(nå6 4mpy{j>>rĸ2HC$D\SE-ɤVp療˯dob"i)ukZw[/2,ȵA1I:+C%"!8J lw[Rk DrBo7kN) lv>V =t9V#{O2WL[~x%.?hCoraˏ'cQp]. ه]ju s뢲>ԜA'=;wUjVd!߸6)aJQ^p.J$(w_K %1c /cCUyD9Z?8Y*)Ly֯^e̕w^ yt΍IYy:pB gRDbS.\ :yfPj>b\ +R,!!%tU"b`˟lT_jrR1t=xX>t=_Ɛo5 . 
[PKQ \-<*a٣BRh>|cx7ۭ -yѹ*E+SѷbҖYNht{)4IH'Fv*hmϥ:WyÓ7;0)MUooNJ*y QHob-B/A-ꠊa5p^HMNDn^TiXͫD6j7%g^R= P^|$ڜGJҫ˫ heYm=ST \1tz*j@v] J+m+رqWw*>tvU=WzwZu4 55Tirb^"R{[~ٻy-ڿ0$hԚ6D*Uv{8&M!Ϡc Fp4jT&yIPkh,;\ҺP &bG50>FݴwM:vRK|I`ܡ;gU8uΚ8[AMp*n56l[ܟ)j,12ǹs% )P5P w۹A@ hVl Vѥ /c#q0s^P㐹ј"v52OSB s^ ,iE˜|9p!1`K:`)ZrDURC]e ۻf+rkʽ ~qX?3Mgc߆pبqcCakPQڈ S_ωr(s_':yyy̪ VWg*y)ՠ#P8#oW>їGx73٥ɨth." em 7 S1|]l"7cgN?.mϨ+0& i|MF8Fy%1$ ɑIyph ;Fi3sξt %cyUnz ܭ%+}KbP(.c߽~HI?eeogq/눒].&GFd6_55}yp=>dd9 Wv93PLnFH|dnvOw77T־PZc@ʷo5'nj$]+y6!GP(Pp#{sK]$Bһ$G2-G+~!#-Zڕ{QJlUdVq6uN 9TBG|6*0$kC퀟 9)n؎ Yrc$XII"B3'81C8ۦ^S}aȞ}F.uPޝ֞lR,LQ I9ƐKpR6].4~I:?aLpiwnI, Q?L_ÄJj)/.ϯ98`nj,IJ ZlEm(>*vʧ,BKQq{O  NvwYd`SwN> pPKa̱P>-ԑLlMo1!:=$>&"$>$9Sڃ9s9gfRNgl'ғ8;g8Eߐ>?MOH(X[g<$Z~Zfp>TÙ?ovNtec;2V؃]_, ?B2U4l0 yO{}ak~ak~o/(>58Fd)-B3ˑ6\X/";] [\Jsa5oK\.leu'Y@{Eb3Wż3_hGldkO!y9 &ބg$5 Vzhbl75hݗy[KsڙcU-S~B*`liyvB鴨bVf|0a0،ǚ xas5#'Hi0s*ㆅ ւ&O|u'N~l4DN)m+g_)~jή+[:;Unȗlx={KͱTua5+}=(r=uFGK~ph}{?_{:4?эo_,4&0([n#d P$ó  !0O9=ha RiFE#kXnT6v\ 0D, {r 1wӱqnzMz\}sUU^AٺL'A)>֠]Ԟ `cTަttLn f0Ћ5< #!Z>c9B˾VVI,y;V <ūWr `UZ0 S"TTaXD&ܢ`MV.r$B RǰS.mWEʢ_QY*~-lͥK٢_ukC!`c@D ky>5*uԸ !Ʌ@aPwھ`]Ie/Apa+W ˔r^)z|/oGUi_oBTf;7ݥ&gScjwE`_ǺkS ';,>lh/$}9S[au6It\MFrk}T_;}!#"NMlx{rs&ݫܚ#0NkP\q16^,T/)TвB#ʀ2i*8X4|pEm߅?T\i[K(çpfƇ9ӝsq7\B)-Љ=5x8E N4ivf\&\v9 sJ+Qt*DQ %8Scqf;CL[(#[i U08JEf|hkQ)SMB$SPJ@x'rU8I- SZr)9f[cCJ2Ԙ05d!d2lks媮zfmU@!C͡80ϱS,}#`/Ǡu0Uy!}=b'{Ue!^{?QBEA- xؘ%pq1GB q,!b "]A!+ݭXq('T6&DzkJ6A\GQw  * osL,?9=@P' Xq 4ݨ8/kzπGI)5&w#Dmd*yJȜ PP'qZ j`\@aC/&P/4մ˨Jb.@ҵ?\}OǧWI$E2X`6:%nr̨*#!+V2qZԊ్Dg:5"V`GC /Źb(P| MBHƫUMUXjL TѬRs-ւ205 cR.Ѡ{ kYWEA.V,:jSMAPaT[!)ìq'kj{ w˽֪zȩנZ ,pŐ"'CdHNeJ9p-Pϱ s\,{5"p}rEoߕ;R!ĉPYp}uMaʺ2W!dά{q{?vYqһؼk אα<," noݬ|1 ̩(U/)r:vŧo U'jtTx1Wj4 HE=%7Cg>DBCn89A02>,l{5뿷~3)ʮ(: Q$l^YpHSz e{Z6{o<3䦼tz%\m/&YrPbG-GWk1%2cJO- ȵi#0vDࣳQ3uȒFG'>%QIOIt,iƞufAȣ]1g(g;8Șr'Ht1jE2*0_`U\vˏ]<ɥBq HW-.oPݣAcUoE5Iކ;ןEuCY~b_Nݷ.UddT`|g忟t?~s( T)J%7c{iR[Y]EM=J>^%Ű/B*bV'1?ڐ %aԗVcl|ւy`PLX7ma_ KúYJĂ=s ~HNy|\cf]rR'Z7mܒ4knsRC υSX<>WF; js %Q:5Cfmsfh#gQ9=EY&wKݍ 3d8s=B5ހ2*S)OU%lB곻ϝEG;9yVgMl?x< &wnbGwhh(ޟ 
LK'bRZ'(?Woy}}aİ7y0"8۪P9T4FT#.YUErp1̾dW#7eY[5ȪJx0BSs[R,NQFg{-i=K`joczd5*.LY(֍Ib3Q#|R^9o 7vJ*AtqJtH2ۂp[pԬE 5g$hnH=v$blƞw]񵘓 LHҷƐ>C )dŠ96qkĩYTkC;̤ )lzzپ iom!rX2~b$9I6$=6l$\9uLH{ډ>s.t_h:`-l?uŪP=bHʀ@hą $$1a/Dvy⓰3W&#<cqA!x>G\;n-xg+&'Y)$dN):by~T SF1ǎyW5{~ނZ`̐(Ogsj ix|tDŽ 9*Lt&eEuF&(5,+Ef`L&(YCYU0(S[KlH޶Q\ڦVNg䩕kVb7BD<M<<Tげೊ5dOZDʖ ;B)i! :-3K:DGK.$Q͗/D˴z.ABOaHӼ&\E`Tsc;zh V!w3/3MٟWGh|5[Kapj;Pq/#5%`- 8Ii(T)CvD6/rp#1P v<9mbW6Zk_՝"[:gmteO<'?\ qe6ZUoQƹE8N9 0zc3k!iM$c^cC4ȃ8`RRj1)=6Vd;Q-;b|`"_©̣{ש(O0S1=@3v D$pɎ%&`A!8]k7WҜ.WP/*^ug7{çԾ]kx獙xUMi e5̒J$`Ik$QWYXXEq-2[K•%r0ZІ1{*ZG0es7a#8g+*,RxAT2.,eF"a">kz2+hsb=L$-b*9e, sBDP`݈8פebفyÐE1)>C{@uA _~:[yP|do[rK+rfvY9k1pp]藷#?9K~<w~7CglfF:;K~L]V0)b;KHfaKs Qˎ!r82c9@XۀR*wu2qPւ̶a`00q31=LTP!Q0D9|! )B#ɍ N;)Lٴ۰%9$mF2.2YA KO,GRDLt+[LM#ԖV 3Н/RJL 8x#j4f;K *B|@rE ;,",'ۍHWy1 yDdd _vv}F3p8#wd*1j3;2*(9#gmL;IK^,k>} lwa+r91#+ ÂCw*Vàh_@1`mON.>W3:I123ˏ8@:G'W }ٝpV>CWۣHv>·:~ޟ_}{Y)Uπ:/HaNn^ۡ:}^y ʸdSa=rI䳴̶"M {d1H)j"6ߍת(+%Y]'q(z{FI,tk;A h) %e%g>ZxE4+&guby".~r)'Ơ@Z_hsHE*sV9|,XQ靇z@K  JYXdDn!8l҆9rcŠݪfY QCW؇1Ιz\YQuǡR`F pjju,FDZ=HKV_gY0c7On׽8;^Z4:êcyIF3|+KLb AO&&m#UǚIހcP@V4;6u˚xi&ܮn?Ǯ}=a$A&[2L&JLLzCAkڏÙqf(SqCQ,Ow.A ]"4Rrp''Fl'{$gE9F6,w18)Pl c(ʤ}rQًgHZ5jrχۓrٽSdL}X+U)eB8@vRvZߖ m#vnS[+7wd^J`O{ǀLc j9 'pTnWXc7:u3emHQ#QKhY̤ bm+Ӽc޾@UZt=Myދ:HIf}>)L0U VL`rJI~{ hvNIE={uwJ`_:_WN 4e$4jzGzT9yp1 Ĕ2Z[Hk,B&EHS1VdV))qdPch*ʌm p*-Z+ePJ(CPŏ^u])i![Xd5H^8'.1eLt  RGCRd%#)s%JFԩg7µ. g@.E! 
*4qIqJi/RȚ^䗡BJ4>t53'I /s]N-4ڥ2Yf?ӻ.yIu/"C PMdQy\VSnk?5NJD(df NIE,ˆ@nH2ì8e#eWX;rWgo?|PnN9\h kga:v3ԂTI2)gCp!`d ꜵos!C0B%h/ 36bg,F!z"A1Fit4xrF|M1Zs"$'(8 US7yyIպͮ*H2#X~Q39AD_LY 5ԲUZ0]{6r`zy۪B758fE mB+1o33`V -V&ٕXfeUh54O-,ȞffSN?- hs]h1mgDt/6[~hfv_ گҎ#%?Xm׉oǠܢ ??'C^[!R<+<?ᷳ `.MozM_gg ߭;; }zOKrBre955DiQwt듋wߟW4-׏^ǧ'+8`7GcfUJ$ }E$BT]CR%KE.lf-ŽCYg1AGW·@ oyo.tl[ qNN}{骭37ýKb=*y@م^뿽wsN)A)Xԍ+u9^}g)*yU^O'1/mz ԝ9@ șlV 5+4g(7y3T~kTwO?XQ#OMdc3O^ḓͫͫͫr xH*L_ o hnwNۇO޵gb1k͛Yğ˛ jk o2뒭y};bk?mqt]w{2w3ojJ"ۿ9r'o35>bOᠯ!AySH݋~0ybkN1rB]ۉhoY xuyc=fޥO6f_R;*'ol"X]x>݌"{ޕgjB't ?x hփP묵K9 T0oL j"cROP5wrkN& S f<3mdq:zZ78^iUVXV7Qτ?(DُwްW /y[(D饷^ԌS0Dob$K0[KY4Шcߚ- ?eiܾ4A[6[S٘. ! c5ֺJEx/uýq_Iw/-F)qMFfW"KB%脐Xف\!hFYY~x_0hts7 ˍ; Δ6I4bQj~בYvXhh-dBطY~]uxkYv+kf9D5ZkmPMZCλ--PBm;8_P~6F7!Z)xvhyi{XwiEAkvm'}D-ą=N]û|~vݰi;t@f@h嗯=Yـ!JZQR;クۢw?юa/v{@N{p B3U+C\/z 8+vM̌! @ Vgվ ruK9BX o-|4!fQ ϜD$dclK۬9( ή4SԈHb&jeUv1K=~(qVFD}t?_r!p#H?ܧ⊒ҙl6Y gس-5I$&A?]l^>i۰KCcDxfA̖H212Y{ӧ 4a㔃 ' ]<Aٙvւ}miK6Ghڒnlqji' 뭉wJ+';aa+ ,,=-҄:b́}8D\X3t 9S\ u^nk2*?VEݝ,0к@&/eч6{34({v_TmD'Kbmfs\v9G%5mT'>je.L͟0}nWcTebCRM㥨eʊȱ\J| loB.Vf@ 7]y&@5M6.oЙLuY$UHA*sUD)6 ZNL^PMBoXf\`㩣3aRrq(ڠ%xBO \ĐD\}efvRBH$л>6l$RH%X`!ll1V:yp+R#rFR M8rN|'b=J0XU+DKAh]HPeY!4*Q 8AYiKJպږCeqcl?lT bUf:,6QZ$^g Q Z6ZI)P+63Kosb߈Foc5d#WH)>yqmѲzOe݇ 49 e3vi@K jiٺʀY `@EuJC"5í]Nv,Kr ,~n^Bc9@a\;Nڐ=^=hUOc:l2^veq+edXWd+(#$\yI!eN*2 ]t .̌t܈ekٌJFXT_2_޵8^}ֱ߻_\>/3IzJ6 k _}z1 #'ɧU`{Tيr:IҾ!؝鼏#rMnָA<.WH Q?ZͱTA+&?f+ ccP?AvJ4fb0gxxDoS֐2Q*/j"eDF)dARZJ16P`p@66zF5s "pBQq zi#cl0/WI T>!s^UX̃6 X+Fw>L윲+.hUtaG}.{Jb*#9 s{4ڎF?VgzH_X`J0`iS/<%ZEd{0}#XHՑfhU_Df\"7p;ty a1痄ha۶~, #JcdB0eghZ{G>~+(,2FkII~J#!K"fyDaC1BB{a6ȺphB {%)0'O!`[bfv N{E5!4H#,PϽEl(YS[!l97U$sTfq^3ʢzm#|1Z?OtQq*KP?Wx؟BTiMDMZ &ǒͥN2Tlz)B/'/(iS2SJ!5{L"6zCJy(WF@1m6& o͹3NB*tL IJeb-J0~2uH|F/Y0(3ת> 5{5U}fl, C$Cᚼ{HdCP[ڲgjiٗ B4ךܘn'w^GIS銪A/7O{DafmܒaٺxC[hIf_ˋL&8ˋ_ף;{O˧ya j%k;dΊ8:,ɽOҟCJNՀ_a'Q6{GФ٘qJ.l9|-~|@$`71ڀ.|ϣ^Vd$dB;{У*=)%a' r߱\y>=߇y U+AZZХ9{1wlcZ̕7o):b{s밨rYXrߢF&hy+_O 7)`B|\zR4t~6~6)/h6=ûG3W ջw0Km4MeVyhk[Vvul4.6 o@eM ;?VNo]LO WoF``.zօcaܞb y~VW778pӋoln_FY#Wr,5g(_Kxk 
Y4ڨ5lfd}!ZrT(e1!'+O^-0/8nTyAޣFi{seM%ЪQFޥD^SIT>LqR4  Ƞz6bG`)T0 jA` &\H%ԋt&JJhw1n8WqpNPf$atض^. u~ҕa캂-铭#_g0HJ{_GwOo|>܁D}CXf)ʫ\ʆ)%7F"MK+=I8=$ ddedhňf}.(v0~6`6}uWPnZy%!e#LQ()BI|z6&/I>jOW$/ߚ_56wxWv3G>뇞Y[e$Ly_\>H쫷ي$=N=ܙwCNϊ8 }עqW^\ "0dM{5aqplX f>O{)6(AAoғD= p}0%D6DrW&E8&xq:eE2 -s\ QtU4 =į1Ӈq&+\$~u$>A_RE &1N+7erG"QTo'c(K=U?0AX{^CAFyŁwϿ[MWоOvox37&WL\왋s~IU'R:PGZ#({1bՆ`)[Q*Pi0= "%H>)_lѐ<(B>z53;rfMX)kvʧz;<he|[^Ͼ/h2u|1]=rgUB}J-ןs@pt 49:s5 I/ @#-8Fd@oCFB/ܪ=/;\wl\2sdE < JnTVVx&.}T;N ;l{-.R(u8; SRY}i\ޱֽ/d5wЀ0Xϖ@R:Mդ9#E %^`deTO϶$>td4655NGɄ}qj\6BXGll䃴ܞy݋R-7ʭ?< 60 Č0Ճ(ZhyD`D n<.c^dU7XA {$:`uO1T<|eT^R}N[ 8[=Z c_1V)gE@#3v´Ct2jDv={B=ʍoxP8Oe@@v[?Z~)den?ln+*y?W:*FIyWL OϪL(@TXby"F`xu[d5n:0:\sPpSVO-DDSCSca1Q"y WX3_p%pxlb~lag-$rZxqu!`Di3DUxF"U`y)ǯXrͭWh8rj;Um'}R)/G|Ee L뾫zlMae_@u٠QܔaRF-j>HXQ\|9Gr%p\6x/ {Iڀ-s#>/ 4ZbwkE?-ͫyn,܅ל؀''I, vSW[Oz{WܑۜhN" M%ߍs({i:V7䞎Aj܅IM.LX(n4 cf6LJY{Q{z:CeAz\"9TGϗM|ʞ_wq~[Xn$5`#DPV3,*&/ VxZqTkԹn&WęjQ4!\` \hoMSSXɩgޱfohT5sjYdZKL 31* ŝRhk qBph:ˀЏ: UX Q}lmB6\(ɘR}b"A}G|v "Die l$M/f 'mnX! kͼz^%)֜2lIJ>T2tXbD@OOl+}RN7'ioHCSh4`er%e g,([\uSM٦U𵈫-jX RFjPZeh`tA/7*YXw:yL7OV̟ԡFн}՛$fR6g"$HpD~y]8f&]o9r=SԒ od볖4ͭAĴO22)rƒEK$2HzdfN+հK[tt#?q)$` 2#˰B9", 3,X.Q (ݰ~.V2sII*F?Y0u*#W.Ҍ3桩&x&ag}><ѸxM'o?y HX͍@fUl]/*b˅ۋ;ݰSji-qS1lopn]V c[97"D{rRIzέݮɝNTٽD͵=6 BW^x3yWP' pG ?"r:1f_5m›3mRZ V,5Ce`\m1J % ns=RBj 6zKI*B Ph[lӪUaxZt`V(bQ 16UhuZtb{vC S3Z*mk%ۦVɴdPpJH(O!zof ={t`GY6GSk"9l%VG#)ZVKӇ{!pZ=]=[%ڈ1lZa(1;V˕uB>#W I·L]Ώ1<=C4ǃB1?_|`٘MkcV*xtes:3I0ʾci^}pqu HLk[{߰ :qD* =L1CնxUzZ&l0G,*ٮK׮s录܉lQPXrI'c%FIˮ* 8kZ)coImpu}I+^ڨUƆ'w0+܉6pfށD#WmuԴ/sCAG"Li+`'|KT*0;c8EeByz]x储:\rRx:X7ɬg;&|=j;zy7'5jĪW`|b \92/ /iJѽ %8qJwU aW06.vfV2,X5[uW1p[-ȶ>gP[ʙ8r5d4R%Ew퇖e#"0cIqV $d/o*77&[.P㒝orђ#'ag3Wʣp5(Nq*MZ̟sV;e,*-(M _^ȅ]:g߆tA0WKJ5YC檟Z)'qn2"EE 㽑GN8]E WAJd Azǰt.W wܠļp:nINS@P}v<0d<-;fH fg:g;o |uJVN%fbs@s\WV:c>  eJzJ,k`VG^dcnA=mw<7r.f@Rx?)F3(Ey)ȹ.fE0UIafĭ4GL o$Q /XZF%HH)ӂ"vX#C^Їp`T{a|?4H*1B}"wlHu}xH->X 5R@IM_k+DDȪ~e`nh9SN{XRd.SZx`NiEfcdeyjZE@zbBr#e9%wc,\ "J=N`uz!< FSŝ )_)oG 'iJ㖳7ﵮH`Xcj|24nҸYJf4nxxň~Q]-y~ yl`A02L cGJDF4@b%N#W>\[Am!ĨrȬN |hU ^)9? 
ʈdt8N2%uNJ˂ -b0uS S9++a0Y( α38jW͝#.ܺdq@2mo+,C(~J,Ǿ ƀ-L 0ʹbB:bާe8Df60"@ ʁ_e:7 >Ng ;# 3ߞDhY^8U YGp Axʐne GO5@3*xhIndd4{N&bwg3km#DJq4JcQ$2!ZX/d: PtJ-TJ=PKI&QG=Tbe)F@Ss AH }|Yh` v)OId*XCkR$ y":)hQPĕ)F:PZ1V<ɹL66/n[~:_>;eJ1QmazPΐ8,W|J#$[M5bWgtɱls3s_JS~ӊ7-|p|ĔC氁JnHܝ"E~jO>g,^3F&t tEP}_,]swuTdw2h}h~w3.~};{lދG8t~s/y=/vF_ӽ'4 'xn^ofN`G\_|?v34^\<@^?}l6陗Ü=yx޳z?X<`}Wы28i~Ͼ|4N9=%u2s܁GGfݣp4|LwG_nw~·f1ݞ8LſIH8Pe@`<( Fq4}5 itCi\ʥ1χ8LZ /f&'/m97Ͽ}8$A^)\qy ?LaW |><}.rx0$>P4$ҨYz8|upYl]I==}?O`? ~>qv<ܛ EFÃhrҥ_ӏ°7ޟMgǹodt<.nYtO`Hb j3B>R^_/Bn EZhWmNa_z47& |P}"Z>,\?EMh:ng ?;>;LtwO_L.aEG(JgM)ɔ{v;ӷx I<3n)(r6?ws k<\Bϝ6H/*uZI"EP+~ ^,׺4;p̈́6cA4qi0L; [S>ɾe(Q,Qh%V<*992`biǞ " 55z)F.MS3z4Y+_@ ˭l7[kv!dqK"zk[nFCRhTO&fbyʩ)\%9$ˊ=Ij4$hoq)J$znK>]JppMT&RQhBXdd {v怱OEOYi|JċuHX+n3}3GW>2/dY3 5gzFN2/>ⱷm#V3^5yϟup5^~m&rŋXa\/?u'?ᗯȃeunWK=뗯jek/ߟϧW5;?̗/3m1-Wu(>&p|~5)vӬK_sf$ߊvIIUZmUşې@SkT ̕I[Q :&A1,;1FmD g6Ei7z@M)#$_Cǜ c,*RW]B٤H?gӍ6"OD~sFlD{d~q&^ͥ]V#FtYh@i'D&*?ZrP1M5H>gv#}5H_#}#}φINf59dVtrfx`ב sc$,)O(v)nbх2zLeLjT]1`xg>2q-8 [CKB[>5RfA0)HrEf 9{5zy^53L "Z|vDbRꨕ(\u0KܳޝuPfk 2fy{6{wK>Cͬ}r1`!TJǜ[/m(V1AK1BG/J5VaR jt>Kg.K-%dt:QF([ ~>4"Y6zEBqYfhnSzxNvS˟Zuc gӵe/y0"x cX-j6]ʊ.4댶 x*|)j8N]R%G񾃫߷m'= 2F."$?HE[R$aF2E{9J39/<1KU&fE=@XfJqBgx~Mrrt5rݻ}vz2-^EۤX(C!gh!lQyY/dtp%kE>[|K؊X5Oj͓y=k͓Z<5Oj͓VM.U(SWl%P꘸o93nfUO>C[m8j0[1X(68U,Y"߁+#/g1*c.BCJyAAK>[†NKab-vOs1];o;o;_]|V?s}l0ޟA\/6ޫ(v8%wR{k>+xIl7n{ބ5)P8:?E} Ȝ%Tzi!ɧҖ+gb7KtTc$۴E^S"wC^E^E^.z1O/)KbE@M\8DYmAdDK&F"8 ] JC 8H=F-#q'ڡ=.;td|I(8 c,*RKRH>5[BO/e>+5FLKhN|ג5ZFKh-Ycuܸ3΄Cb볹˨k 9, /,x{+7173j 0ֳ@O`"{Nk?s0$%J2C ֊TƇ2,fCPS@Thƚj )Vf)<>lULh^yVgY3*1'd͇d gak2hz|qgP;{<d&قvinݓ$RA\I 4=]&Z\LJ$Q(ɗ2Uq-{$4d^ʮiM6dnMNg>IzAԲ\K3Wg +)%+ f|2htփ#=и 8$ݡþ_3ymlK!b$1w!@v(Z@%AfSHcV[ʄV(Ѭ4P@v4P@M5 B #P+R+@lx TX;n| Z;10u7 ȓg|[,p I']DY 2ɩ+gD˃Id:#^r1R#+h2ZmJ2mGۨlB-tWǷmhhՅ.f#6jGJvvoF.2u$C ":c#C#j؃>hBг2wFͥ;O#wu;\Fx"q9/K-j|R*ɦzG4ja)S+v=fz\/]Kka1\Դ٩W:c4WD_vQ9)?W/_|}+o?p$s9XVU狗/9~ QQZ2+?9w}AbJ|us[?o.5gDPSu :Quk~Op_ک wFpk5)q{u-%=ԶP6iYY&?Yޝ=hK!ó蹴i2НT0aX@,PUC^S[mA{qwri'Jd+GGLy!6FRuX*$":bxwN)2?>z|v=zʬhUV^v4ӎfL 
dn"CR!-p߾WE9I@!_!:tIFQJlfT`g10# OeP'ɻ޸W~JDʀ>k'`F~Z$JiyJ4sދ4ϭ:p73ro߼yO?~sl<g8|ɛT7nb3> [F`?.Ĕewi-{JDsowD wZ E?jro+}g~" |o#^ I Ei-Qʕ(KPM5cj ;d+^m&p58̀I\,ϳ_9YЫl ]sb^F GKI+!-u3M7~1FhsC0%gpa>Nny~^a*Ck- /~#s)A*iyp恙Pm 1Ȧ%KRz].а"ڗ,~gc"{JTMfRɫ 1N1K(2l71o]\G)khBϭN@Ib+bmfl}rʦ)>(|9:C@vyFċl:=-w>lT` %nH.yLΙDS\&s-+.1= Y dU AU~, MS7W?]G:*!5^3rprֶ#x kzP%NG.BU#eҢJ]4)#Se53{ͮ *(( 1D^TJ2w0 dvA P+I /x e0DD0g;ܦ&"@jtVsPNגZ)1ͧWkb}BpUqAA+lO _"CDBRV!IU `PPߘIlTByJTn_ )BEuI7< \9q<%"}#e:)VAb3 F N9d,K":;M&6E6ةs5T#; "ޛT*fJ_aOr Di96z\4?dBQ!uIz<52J3`ȷ yz*% QG55φ昜t@(~,p 4{fR|r|dKJ߆otTd'a?#BD^'rTl ~/KYj)8oEsL[fWel|u]0$E՜ T&2X%L&ѓVz3 h(Qf&gNd[~L]cvpjSQ+"g7OW|G_! .\7K>+BzphQ PFW EbM+ s2zU_4>h[řU!x{U^y]_z;|$T?$]յ?Yݡi+60?|_]Z`C_b*㍕ z./ 01P?^}Q,ʲU뫛x㏆Ӛ/>QEнo>z}]caz k'%+>yqW A?zwtPֈs+ E{Y%*b:@7G^ᜌ-{hTLct _́%igG!H/̪ ˏ42rX |~'Xή $]nBd*(1j"erFt"[Y`,TDe6|0$u1=d.wؠ)ĕ55,vL.S3Β@)Y+дYM%]R%& ggާ#GFXj1(y"(RYoo%LcxϠt!n* ̌ LTv&(gP7}(M~&OA0]J,L 7ywy0[Kh`=apQИ Lo9LD_; "V}A\`޻wdPPbB+¥a?k|-\G5L}HPg3;f@3< MgJ}ugFgLGq4LYAB>9tPgz]1pyOPtCg#_7L9S?KɠBV̔mOIps`d""FQaJpUj9,˥cmHM,2mt0; rrL@ @rYk=`a0Ѝjm@&AXssyޱARɊi>\#7N!!؄aH`jqPf4T2WYk\í%~c ˏ4_ڭ%~0YGtJ{qtɹ1`ZH VjTa] 0d0*0< b6z-eg28g?kkXACa'1sP'w;B%$"Z?(s _}[iB^a..;6(^`Ͱ >Jb\xFHp0jHF֏KE38>'|T0́CF~ayqXr;b-kl%EdNT/N8n{,Όx(FP{̼֬D?2- ?Y'c Rj2ѽ{%^qKI$!O[&5KMF2t^HSNƖj-3Ynub,_RS9Pb;uFOJED]\- ʼ >&eo>x4 b{m^q}{ݯaə I#S}'$X$` ;E<:eN ɉ:w|PV$Hv+;%a*]Qŗ?K?x&氄(' XZ %t:vl DZefwE΀UAxѥyX;+s)(vluz[.jY|Z{75k|d՜sǏ^o_| )2|?BfEʻ486j)!eC|,O 6bXC̰&}n1P aEi}Վ䨪W aŒXw8,9"MNt٫*pɤA&&߮oNYؠ 7S:SGc[,ڝl[h&mXYɲ gO}ht$cӓA|̴z/-3P!^KkNRQ)G{_#=92_"eg7n'>9+>Iyƅ3֟޸MWYAeٺ3" [Rnaސ{-i.?1~ԗ7׏,pyxL#pqZڐj:NjX9+fb1Dd6!?s{[FZӦ³NbbY<CCE^nWzp4,T;R O۟ ib01rB.YAo2`Oެa/ݓARZD:&2҅2S^,@@_p2dfԈ1j7>e5ӭdĵU7WUWe4J<z! 抩Lmh@g"^:P,"͕I"ZHFC^zGQS9Tr A]K_ Q$|Rq8CYg/7n{'cEW9SIƒGd- o>SM1V< .Onө%fHzK1g2\*AR;ܹwlPmu_AlL`}U0#7Z|S}ː_zsY_[fTʺ(HMپ"Z 3k2]Ɇ~EC&GIo Q]r4J\_kCIpʡڍL3-덤wleZ_FV:_Z&UZpjeR_v R+5TcfS:қ  7їNUt.+(&~^4K V!/s'삡͠}?DA!ON]~dHO r7{dκ7Qܛ=Ȝq9wlP^xr&R+`;lNs%ZUń\-V,nnEkLPFY½f;;{QpثѲeښ۸_aeOP_TTM&qűp1E2N?JD BYf+) @ht7fR|K2A2.Bִ@.D벼!d3-m '#{. 
#یÇNP`kq5][?[K)K*7 Ʈv叉F( AaL0/1sR@Pmw Bxf7ʬ}0UR:<8EXzpRg"'Jډ:\M3! >ŜզC$ꌄI>X&b:#6hh0Pg,R00V;$̳bb=1e41p, 5$\p332; (Jgox826eKF; (5`U l㮊{ n3Vv3rX?hE˪ zKyEeSLXHZWfM7߽fwuMG,/?[ bKJ`Ӹ̌\:|О3^\/Rr|l?BJu pYIR6z?v<ʍb=p]~v wGEc-I?h‡sbȃ4XcHcor4>Dm|E?7Z,u ^dzA<[?E|;i=cbG-5#+"57ܒ9YӧV{ZJ7vZz;79&pCy5yzKWRu7zfuҷ@Pj0nqkabpO,@9ۂ# 9ςR6 o-É+ N}6v;?۱r7i6N{ ;m*e޼]q#C\Uc|{?d~`~;/BwU.y''y)zM90t3I~.1%S,]L_4O!#h`^𩎏֔Yd p!N!Doe"HJc dCw?X,C e&w±ӟ ?3IS(?H`_W2t0NB𼅑64{ix: *|S2 l&)y5dۺ]=b|my>go~׳}Ætp&n~DWSڧ1X Xyk}T"4VLwNq²:*I*Y޷XqϵXb³ہ++`RXw#y"nL깜swSrP5 %٨t JF;ɔ+Bngc[9$Иxcer=6l^nol&Y0>sͬ>@&`j %~5R\bXɐTBaep iO(f1b lq.KɺA)m$q@J&#.+<65txQVw'ՠtk,RJѬ΢%YviY1.*I}v23!RFdW sqYs[u9uyMvncs~=g,AUC`]UeIrGf_p1 2/ml63珞đI'%юdOh(sU2I'=i;inTSOHJ&h/,yT/ dT&An%ա.M1#fcd(u BdH{Y(Lq4>FRM3od?-)9js;f)1B0'e`esTdfl7LS3 PJLJ|*%X_=\Pk#{rѕhFqS9ўgYwLPǸ`RlbD&WZ)[Ԫ@ZmZdK4L!H&I)Za/e\xk28-NyaY ±"J'D (ZD6E-5'}v&f jCFW[=+Z|<PI(92wY>3d9ݦ Aٗce#~و#˻q,};oTI rGR)9ۗP/*J&~f.;9^Mmȶ徵MRJsյӢ&ڤlݵF>ǒ MwY \rcpo0 c|c^^Zmi{8E5 h7\ʹvBϒ[U*AJL(QD҄*B34s&1\M?#0]7'ތY@]Raepn u"%fgOu@QL\l8ES /gii5[Gjr-&" Q]Nf1,D5,O`T`I_:2BMJQH%Ɔ HL9&3v2ECt:] ^+zRk_ 'L'΀eX)!“JQ05٫Z)5Fu~1zGb;ίSz]ʿ߉DjV+a7I+tp}A5%#&]8Rp8j?YyG!zޯb>&FsSh<dl0Ao%3ڱ#%Syȥ,G3YwۺBn~ [KE:XOYY-*4MzV$Oft ?|=> m%FI! r'ו%UΘ`_!)ֶ0>مŝ|.<4 lߥHgQ%0BEZ1v]^ܮ(g⃠ZάI,Wd,kOSLpcfg1gXP#ɝMhw=q.p$$QDQgᜦhv6cLx6o5$`OaU;XKg^ޚI>\D^7!+s|QQb&yqx%#=2 i"o+9 : S~edk[jaKl(74d.a TAG}_"XBv֊&(Qxm%Y[,E"6FWR.:JI?k$z^m:VPLN"mޜHc7wp ^㻶N;EF,S8 @m{b?!!C9zH9EuBhd3z'oAd*56ls0I08فX0U*)eBw#h{NO@ozS?ݏ=k&& ?3?~L&`~"4Z)]kp;ILχ|}OiXaRGt޵5q#2Ň '*Z'' SB\RPH ¡8փD4 ktմѡ& i6Uӿa,=dSPі/<HD5~p6Y mMX"א!9|`5m Nk!'q@ =P0v% 2AT % ͯ/@* rpoc="v)JbK=Q MAhERi7dI%4bH a .XL.X1&T5`^:9@d d BG Ql??(XSأq$`Hsj(,_ ==nZ|$8p|yQhD^E 2=7˻8Hv˵FLKj̈TRtHm3i~\c ? njs#Q[gܮz:+VoXwhef+`hQbp #лpk>{=g)'s/jQڅf}b9룠w{\s"wE:Odr{p?'^g"T|!KǙ:Q"o:7d\ r!"=00TAJq*팱JAo`b0 Ib/p C)\ RRJp Bjf iK3*RDn LHpQWFJ- DzqF^\p({C/AMno'7 ]nR}G&WAŭ2$'\||? 
QCG;EVX,Eac6JƁ8DC=b] SSrk2)Q0+ EÄPRܲ\bK˘5HcDx-vFLz`D ̺Y0ƥS|E bt_w8͞"w0gc[tP@O [rRq[pvG`7` ""C8+Gs~BO_^<O0g|o߹7Kwp}>( p*0lOI'`D_ͶO;}qS(`L0azn% "V]znp>8 DMb/"$+ :r D%-=۱-zq>6P$.>K!8X#&qI rB`'i"mNtCA޺tڵ&thrcܞ# n-5 do>dtVrWVHɳZXQaDVƹ;0T`'_jKY)KIҮ9cD~Z΁kIЯO?JL;sr*p"9xOqਏ\/w_dA#fU3MElRKD&F'h83D;n_bx>HnYew䶹/42fIkȦO>}U"ʶۛR lx.ԡW $S q7RGW*gqv+`@;ZmtJIϜzyec7_d(?bzToz/7]¬Hb-b jat7ƻ"u"_z8=O'pBKuŎj ?~RZ`aZgYPƯ_GnP‡R+FKL,zIpדx7X*&QWf4awu``K4ˈR X8b\А[.̕౦(MA k$@̵p =: nUgÔ`UΛ{r- xHFyG_Dp@\9+*NHldL҄UPChsHE9Áʠt v1DS:@1M+@5 Ne^?kHiڱ'U?Ԉ1Nԓ `g9Ik23:0@!e65k xNV HhHs sTUJy*KE+`9OL~9|Lw lhJQ1/XH>\oLmp)oigՏ@J]`98G UcfJZZLܚ:@ :3"h:B*ܒkK fӑUYv ?=7ԙ2PNk~߇U6P}3x WқuOp/^Yi1V늰 SU[w>7D_PiXqܝDp%:kLFSM,=EqKs6_11A|]R<[8 }(z·   QB*$>$uU[5NlqvvO!R9(>€ , 2^+a:@bF,?`إ]-6l9St6D<ib37K)|i$|3 lfXAK~F#"]h} 0EC~`V6H6*g|m\ ~:2D)Xy}/K r+ 8$n㒘d"iɆO>Jjd>GPKڥ?g_ױC]BXVTx,y4ɽ6rxzp ^0c l 6. ck0l=, fE!(GP.KPi,df~|*|ӻ) MSw!I h`2)M_{U*kO ,{u3[%(Jt0BiXkHY]Fi=jhh Z[1C+8n>T $ !cVX/9I'e\(/nu&;?f˕g7 rr?\5mцmhȾ#"iqyQۊ,~ D*Ic֋UZKk1a˷LYщY6EקcgU[Wu)Ef6 ź7w\ojֺsn&zS$ʄ5VoepgPY׫*;F*ˣZsכv/u:D2}\"\\a.Em1x"u-sGhX^QVuNV LEZE՟Fʽ qD8N\+A} yDMBK%&K\p)i)JM"Zˈ }'\ u !K%8'Dj[|*ncFl18Ӛ rTWՈW 9v%ʗfo}{aN~T~QxdÝEzwmŌӲ6lۻt|C`=] ,_Y11}V;6{| VhoEI 32Ւ$KlRPBDdT'}] ?y2| +¹ LY <f4g%V65JMѡ9iq^xp5kt ac#5Y=JWjP-%W.필Q/Fr|0?`L\]6[dW8kpK1xJ ePI`AbO0V 넴DsX@˜ #"XnXN{DZs̔2c\Lqg s~ӵ\K).0Ј1E'P8I9Y]ow6ՙ9i)j:{%I|Ax"iȃULwO6=[(I&[8)pBz ֋bUyܰ4<~I*$-,KHKDڪo{Y{wxGŅ/  H}ńuU1n\`Wmp -#M9Z?@ H2n>_FX7^cb3Z4TNl.'-4ʏ1[P%y #H B d"מ~\d x;}0Z8zRcްkUL, г)=\ǮB"Gs*i.fJkg d rMaȩ^)1&ג,E(75Kg0H"Oa4NVḳ<` dB2 3̃> & 0Jq)v.AiM!4\s9`D9#5iG)6E CP^drU<yg}Rkz3\!'p!B`A ŒgL;J2 kXtmTANigd 92׫^*nvt|sT0] >CCn2206߂[t {hX/}ZSMboϯ8 ~.p_+ʑ׷W0SG\i>La~g{yrOH&|#ػ7ɳIx?'ZRL5V\iB8t8"cc%)V}< j'}Mš8jUx>!Z 'y}`VĻL]%Y-*wBYsNrljU7e{x9:LyQ i:E~J4ִ A罇q8񺁱Q$|0DZ( om)}5ȻOc0b]p)qX&?[ GP<|)Lu?vx6$Bpc8ba?M# 5>k{{TRNz@axC?^參ZApXx^$fxq$iӂgKoW`i<8:  ?:8UWoc_]G>e:Ɲ\w7g~翽j>,*M>}0c%?xpܳi ^'GTKURGGٗ_zp7}{g+dNmYJ̴wި?&~䩀9;v !`}yd/} &tL'm))жwcqNQx?CII 9!=Scc\ڑq_fm6n(Xp:M7Sޤ TS:ja΂x`!f ffj6$ L23)<0 %9h$_~MhiʋO4P]&t=FS & S:bW 
F9{%q(f?zQg/ӧE1VU[a!$ ڈ+vi͚8*G@c9#};wUBRbwi8&֚MU"ڄQ*wJ."L2&.q~'-Ĺ}׎}m)դf-Ab4u4J(nc jjhPҦ`"_g%lXP Gp0 qHQkS#G <6y k]rJ'^SڅԘ-+LNeX !uEm.p *JlB.N mF4B&\icm$\ TX4Rx U` <@ JsS ѡ-Wjc ZMPPG6EFBN!IK]AC.0RV14w^ٜIgZumc m3W\RAoҚml'LIՋ;Ad`Vؽ-h/\4V3 V ɧL| 5քWuEo:P^ 3Bu"{R^VV=Ik\(ե*S3r_#_`؂0Hl;I)j~k993 }ω= ޱ "A/ ;qfQDdL'F̟msvR.UnKeĪp|$Wd9yGhq-["B&q2oMZ).M2 7V~-FB h-gw&H^7?N$vFDmZon);,ĎB`*SQ{'r1l|rQ}z Nݻ[S|5PRUv7Y wޠՉqRKC`mdl:G@[@H[BT?<?j ׯ5V#Z9,+QEYcg'G-o#ֹZO[F%Y &-_po!;8*|78Ε~!$+3/%|q]i[HHILʮ#+\S^2yatGؖ>n*]K3- rkI͒_rE^%rq_U_WI_^95Yx-Z]TedT`/tWT5By֬drU~ފW~Al̪oD*@Z]GhΫ4[#/B~%X Tz,z,楧yι rzc|uLJOFܻ/l@sݖ(!{+zqR׳-X_Ov`R2*!H_w,%b6Y(F&W}ll!ㇷ5N5a % \ѽٌ"R sA4 =J B-"[DTO))_ġAWU9\\)_P.XbPiɑc7/[Qމg͹s3)jΆgY6Q`nVϚ.9',UzN؆Sfg eC(\mhBHAu}6P+UQ63$뷵xN=#+%n`+T>W0\ubIY#$gBե8Ȏi,;B3f_vU/kNI RvOגT|N& XZ1j+ ~+eRn~qIlo6"r,0w. a!7W8L5э[HzQ)[>a}Hi3??Dmx1\$ ;LS$=#T,>0 `"}QRD dzˑXKU)ry?ϥp ~y<I ƞ8F*POPxMHOL'a+8753X^(B 0&Sd7Sd<|EB\INy'X9#x$m;0RǾ6bDi 7ӳտ;c.9'ZT߹cy!*iUΦBb7 hOM0G0PD*f )4w'2!R#.S-VτENCƆ(/$0ixaJM4'8!PGfeht`VtvԿ|L΀煟5c{act#eшy )E(b>}FN'y5A@mI@nWOIA2rPM[BGo:^2﮿Q0vyK`m!fѰn-JqM4Fs !1_Am]< "m9pBɱN J6S $-@jo@\{-)9_xF&,-rxy) 2U6[=8z` b0UqeJBDWeĎ1YaʔDzɒu]^b7QEb5r{߅\엩,kݍ5G݉k̮uam}#ϣXiDuH E?[c7]G`>g-f2ב]GL΀լ%Ң:MQ~:]V[Jݜ>GP78}id/ cΌa21 !v};Kۍ<wq0]T5^L,vFo}=.'j6.Tk[ Oq1Cl ±okgT>eOd<)*rb*Ejs 9=(5FLts6:׭[L<[Y䉱:UzF w80`RåO+^{*1IKv2šEJQeX&7Xs284Eo,-6 ]AT@Ě #zJHloIV6RlSyV ۇh#&| y?_93|'1Z 9V23 z")pWj9ضV6`r;a+r\'QimLm+|.<`>Xv4zsmOn%%-{$2EINMHy]g-1YٳޜP~!ܲ a;kQS34)? 
o_L`Q5c^XS>m~EerF(6RJ7WGeZ[؟7?X}o2?21ZCI8՛L^>0mn^ؘw3 zm&U؎* ˂[~md$'~YIrYڄD<#OR5jMp0 扁xtZX؟tv\['gQ"i^r ɤ'=NRE _Ul76 asC&;KaַI^RZA soٖpdd?2D3e>iqfZ#-Z-C;Φ.7YKZT=h@( xˢF 3XiaE 5qaC]G:e~+EԖ |,'2Լよ",%~ON0AL!l[rj `&Q`:;b.A ̧S ?@Og+(AJ=蜆3өIPt:yt USJe9%H2"Q^,&Rl7N)ck'$1 ̡dKw.>&CB,(pFZ%N '9|qC^vsjt[9lt 9@)xR>B&?~%r nwj޻([M$} ffqvmz<֛N \wgrݝe2O06j)`D,qdUH ] Kb 4V &*I:] ׳64=aA3~kg}WH_~6 3p'p1k 8+%UEV+'kExoLC(cNe&jC@0T1Oc.t IkܵBN3x/ 9Ej(Afٟ7WE"᳎_۶?{SaN>}fť?aA:ZqC38ox \$\p ($rbfIJJh"PD&J8bLk`Bȸ#(mAq[جf\;|LA)XΦ>w۵p6mؖ:+Pѳ)wZ H?= fd #,w߿Eoo* dxc_kqo  9p^ ?zse$z-ri$͕γf+2lŕ ~~zx0+[1_ѓ׈AI*w~r3UHJp"`;9YJN-)vt )2FR:GyW /}1>g5U;OJǂK̈uEۺT@'PGO;e*~*($i Jb K|"L$bӫ|qﲵ`dRP XUP23R!̗N,!TEA"I8cuws& #ov f`Jv"^0JaOXiLRcƑ 1&]ģH J7v$1l1HUruwc wLG4%s11+=9HM I$z(eluGN\Sqʨ$BJED+.\PKm(J M: J=. r#:>&5(S F4ˍfQ*l,HcF}bc4JIM(Jp( ~(o>j]'݀F)#JH c".LWF"Dˆ aQؘޚw0Sń"]̓;)?R#3hFPȈ*䍭g cŐwitd3#nwbr~V,V+6i]Kv covqAK79T'^Hee A(v#[ty(E* n'L̞8j8zy~^hOiIX0mV×{1;j`Kwown̙~.ڗo[_sVaHZz}0 @:+ ֢ %k1bQz{b7{  lw~\n|۱~1YEѝ S9eZI#VBzED$az,sǼ5z@쉣;;;'ÓN>bK' r+p&jU11n BƦ| ޷}0w6!LG8N؋4lPz0;Dm%8TKƠ-ۉ^~0Nl x]E1nTvpjcpCnfq͡QbJh\|yoCJzTc[X-v],AvDtwBb!_<($Mwf)Q3B.\" RpbOC~aVލ9r$u(ŔQPf}7ŋ#=\;w[KѽzȹC[Jtq4b9)%j>7ZR>G0+ͬu L7&fȄ1Љa ]8L$cDJߨWcWDG*k@!c̜ I1Tʆ%NZA'Arc +T15舉_"JP( #Q1- 9繍2J%oa W_n]u) x$%mZ̟ikY. &yA@0dG\CU:(w j( ](yoWؘ@%ZF 淘Y,~O 4KDv0oo{)2]`Ǫok<>ke=f췜 {o- ݇}VuTW EԾJ1=ⴔbZ-SSRA=::YwwU1@ŢmUnce{X=~ =-o֏*oni]Dnf\m%zjLfQ_^m dz|ߑ.}xқ#m914.r8,\.h}ԥpM?YscrgV{]OFAI7C,S_kcfFӥVc&>K"O\~s/}v˼Ul ա;p˰7bٗx.<6byHfZ,29.8O΢1LEWb=%rEK- %WV\$`6Ȧhlvo(6zIa=nds!Dw\"iT;V-sVU] .VnqMً\LD9û %HRR7c:߾g0|r*nE{ݪ8,K(}^o?BTx;F 9ػa["A8l3/vnhi3W݂\1M\}jqu,n3L g9JS!-  t3re^.+; kT?Z*!ӅQKѽz(CSL3a[);;!+I6$9pWVKD?f3!)͕`NlW 7&:AҟarBq [:GQ)oblb}kN`(}[zsl?f;l "%SG:d`bԢXXPHr->t_;uy}L yχ|.W:ssgMkT'OiIو!!A:|ݍS+?fqS977 ׭uzK+>f x~zJ3GYtb#.'k1fF +7Yqa5JL{ꞶQMEcGNO\ƖC!vҺZSk(z>w*VOu@6N|z9F BB=^j/3\1ѯʵ>qU1w*V΢oB"hu? 
JAձjysOWJ>mSƞ?-.QBwp]0ЙO t&("?4s Ʈkzz= 'DSENp ` 1IEPnA\:%;șͪR쌙۷x{Vc 1.^;+ԨO,HS"qELZZL8 V(0G5TIw6EYf8[?g8G~P؜(9,:Pv_\x%۩"SW @@TS j9 fG#$1Ei!%}QhA0?(Y߯.s1DK#"lİsp@q"ȓ$2i1IbBc=X؀GKJ/< lU*ERb RN0@$UXI4A1` x8omlŽ hhi)=3YZI+sFF sFAE-][oH+_~1Am,v :=/uc[2,9ݙA=E%QRQ$uq%Qdwu9V^pEa n%^G1b: s`BcerX;q$x&ULC1[.g'<^kie BA"5˜HH(BR`gK,8Sa\ (42xOfsW)Ί=nܛBwF3~~8E:/nB0Sb~+P}~n9aSYRRBz-D%[ # *3>%DyT)Xy/ o().v`F]~lQ3 ~и_QSAt7!I%7]_߾=[&rpwfh>O pʜ#Lp1-\6gl ’e0tI=6l~ ƏǃÇӘ:ʉ"la( LW6IPVt`6#IiUI4 K 6jnD!) z'`_ d[0-T#R,FOtiӂLODZ8GGI֑gq2ܜ8Fx;i?S! g ^`f95TO71a y|iN~EA/=F6,f#.crɈ ;&t~A_]SNv93^yH4ȿ4gZ1|D!`xYDNvwcU1G=Pq,=Rssy8N@L"Ƃ@ JDjczS|@k\F ^{+q@]ot([Ǭ?>߿ҏr%d3Έ(77}B;K$f^[n~;M Jg{ PIfXRsSIs5SZ;c@> #TXX0S^=1&ג,E(k,'1 I(p|H˻o`GeL!4ȨYooŒOyNߙ.iU%~w˳3IvHdخJ{{MBgbJ-!W$3f|'J$UE 5шehve[hi-TSZn48uXu&rd%Qj3^#:3}j~D ˎ2Ժhe{t 1|gbQ+A&>V躸+Yv%mE=LUȖ=]/Ebv!x͛'tt%`}ٌJ@>KCˮk/[-H`qjm'/ÈBcݟFS?aOQ Qg=N9ɐ&-3h[k[QL1BDRs,pXJj #IJLsj& v)Z_Rӏ] T7hf /048qE|/o'Ww8  &oF>ܢ([HȯV"b@b9 UXIZsdeRQS ܄y8CwX٩ت;A qe~A+;]ıjsHF\:7\jսЉT9+ݏ>N8W8>OݡJ޿h=3?>7% g')ip:{ ~MofFӻ<7z(vj\G}~SF;3' m9' ^Lg2tWFdDE9lgagWc3@}f9C8|rpGxMIt.V#=osݦmoi}JZw6҃ӭb 'a,5 0 Y}iY=m䜣.EB|!1vtWD RƴWOtdv8zCѸp 3l)Snz| /{ ??y<1Lt"7ט>FIMco"<;X7ǒ-|lgv6/];m^WbY`o'dHə%"h y,veJYa F γ@eAiWm3/I wvg=cĄGq*i9m-NڥGpp`Mi}hji -hJ:SB)۴Z+v G7sŵA4o#Ilm'_kR&܅sΦ1#xJ'\cc>H<G OhdHի'GKek5kdԬ x"9@g&J1t]kBo-5H{)˱aA{Ӏecj`s=m3sz}w5ʈ.O;@%8&Fnu34LќgA Oh݂_5^95`dPƨrWg}͇_?ЭPA%N"یZ" BX4CoDw=o/܊ z2O^SMNɫpgꀯXM7̜֛[]Pu1B?/-b@2Ps)C\!sx12մ,11 ./3Ɠ&? E>|ẋ;eq c9Iia,LL ^kjdFޚ=c iTKiu},b=A]Ws}X hљ!Dָb1_DEi7іacoITDoٓpuzךeS5!!$!M'!nZ@1ĹkXJ\:]͝ƒ;C\pi}o1$ckh59 % UѪhє<:nC6-wTwP,V7L3,mC3#0/Y a{da?3 3|Z?PVީRt>/G>gti2&ʬ!2&ȵm60Y@XTY]N9% JXBFQMI1M;W-j$JK!l"'3Xzσ7bw4[ҞqcЭ\Xb2| eOtL2y*τQӝ*iħ&C+Ctss)rIf2#͸y*6xlW I0780zk 3vT9rss1h%ədrp$6Jª-8F3c 3 . k)nXAUv}؅ѧ;b Cȱ&qNb,aJ tنg,gn4zi%&OϽ<4LJqWS/i:;;uBT\)U4jGbDcfˏ7B 2ԋq'Ư4ɛ BMKRIl=?Ma5EOX#b0A-ӈ%Gysf '3 #ENo?r 4jL&ߔ}~jQ+AR9&Ff9~yynTQD)4߭hp^bWbɱHBlcјl܆s@15A@%rm2)Q1@3b_ysA,"<9q93y. 
g(7.b ^{ LkSy [$y ce0FksB ,gk<N8RQ:j v(lH|6 @б h87[NpҲrodRMY:(Grkz:a`4F\TQuQgښ۸_aeOv$/C*vRsv7ٗ\s\S8[ ) $g( qQRe[F4eW =CB,."Ġ^kη0u5*nUysO}@xB54@~=dr.wwZ>oZvn1  ݍ{rO'X- '"SQcseѐ69dz9 V9CY{;,P{6G ,N0g)gMºŬ墘ˢZL=h=k%Ir8r?v9CN#jv>u9ù[7ج+YxiciӳĚS`8/5`2G8Vx sUo>^:i6g`,GQ9i9_x Vp`OTכ`i,[onU!&nc[|wsݗ_N^Mo6oMNٝ~҇3 Ɍ`/o%n_QCV<[rf>KΓy` enPL{1%Ƽt ns{q8V̌5\/y{%'oc!ΑYQ6OBգ ʐ-^mVտ«B Ɗq;rDI>z- K%Z}r3Q7EJ#8Bܟ K䊷WTt'ՁznyҶNنiejKd8L(Aw"G7(wb"hg '\t"?D6O']H gURϷjXSEʩ^frlUA0>l{%C$sI)&ƼѶaJm)+!.1m8*^WDJ:J Ok$mY LLwӥȞ6/_߫S*"RK#(߿iDAXwEӥՉ L*A)9DgλߞQjtV0Nwi0k_0pDuޥc%kбr` ^hsz>fq"whNZ[L.+pkԛwobJhmM+`!~luh{!jN@a!Q)y)+Ŧۯ׽IsiPZ/^zqB=A)^sdJMSH둬Y

q m! `4H +>Ɛ)TaX U *RWi=a#25N38e. O )"a DE;;c '| 66 0&a!U+5#AlfpG%P0mMb5[oݸ^qΊöv]nm=~;e.}9'sYϟ@=vh 7h7Mf `fuhVF(änPI%Ҍ9"*F;d7,wzHIZ'x" Ĕd^Q~r;r\3>&g-˻|b`eɻÉ."OQNXR09Q硋aAYrqל'I1o2) QNL:Ig0a0A eA' )bOwc7O'[PB1G.]` {׿N $)8l#*E_jɫ#{rP( TBb BnۆcٓmJ+KT75OR jHWq?Y4 >4ݐrDHIjV`vHeFup(A uA:vA^@I/KfO)l>U D "P@(`HUX#~ LϗF,}w 2J_n8,BeAZB -^%Y{}g7oL5*nd˥ ZB;d910\ (a|mSsg@V{FHVAc1l ! J bcDB9"<2?{Wȍ/w])|) 0_nfnq&H6_na'me;~Ŗm[`ZlSE,>U Fa2:fᄒĜ67k0GŸ˸d.ϓ]'>}&.3 _$|@W)A;suN "g_?Lȣ|py@oRx*M^zrO ?Q%p&׊M.²=\0 7LqMqtMnJr5tGR=^@5vVFmmTtv7W6L ڨ~b1f 'З!p ^{$507HYtks -,{T o.$AQJ02%UCE!2Yǽ"˴Nyg$.Dlޚ49ۗ4Г[W*i` QF6@n=GgP*DnQ^_B#T{F D0醠u:ݼ}`yhIIvRbT YPwӆ K[\6j4kD^m.$ZQW } \%Rn3evs޶Fg&1TkOWhki^ӫl2lwfRaps,|M4gSd.D3 ҝE n9=lЊos|~Anmn/۫{uׁS$-8eȁMB'gvt ApxB?RgͼMzoM|b~܏k C==cLؾ?GS :#6>VSmztm.c~FWcf 6=FlyttEaHB⤓Ra]mƶ^H% ;-E.pFck$b7Xp>X1.G GsA11L 89sx"AK'F^1t} [>1`4);rDO_H%Q||&w?5goI8+@?|;^JIj4x?mtY>7iMi(?ī_2s~r-[le0B7qJr8.i>&@O?Fww_鉂arB!n_ZZTR- J%{z4&aLu1׍iپþ>oُ00֛ܸ M;73 p.)ubS+:#Xc$cZHl?݃Z+{"#XGg% ͈,mykT&/6y(۞4&ov&5 w3i5iM-!KQE1@)HBDy{%zڛ=~7n -q=(H9X7QRћјzc~ {ݷHNz/+QBNKN<ǪGalO_Á!e?z)rB%g8v˫,KN~Mgcfj& _si\9&3˵/?sQn}7BoEyc ڋ=# gn\ͽMVz,gb݋85< ۷WМ/a?5ư6pŮ tRZ -uSrj}я5Spa/'vĻ5zKu2&|}_2%hVvfcT&L5b QLvZT3Z[/)߻jVu3ipl+ڠxE#dOE}Mيgs:c@*eޘlL7od7Rف7Lˊuꮦz!vp}1ᒉ{qu?m%Eo _sq0qӜ߱bΟ/w:qb녜bgkgns>c:ZŲrU#+͌BEgLVIFB`TʖP "P%{M x57ʌF{y0.y:-0q ̒:q7J||*ϝ|zx=q wі!xDODGb{7z"QE~nѵށGRX^uDyHG6ǔv!3Ì)-+wnF6ө]1;tA,K~m@cIuZ@/O}>im:ƾTnpMFh;-jiA;Z"4#(*1Rz:CJLˈ*Qɨ%ǿw}Y9\-^=ߖh8 R5 L6hHh-b`FBNn.C"j J|N hKDN(xmʞGXgV8XPO kՉv8׼CT@&rN@'*lqR!ucH l砬RP ƀh\4SuzvS(c4.z*"Q!xqVZ.]L%t9A6SR6+pç"0ƫ4od58}-*K֚ǔ ZTK8 PQJyˉLkp1z"+.FK'ݯ 淯T4l>J! QOpG*j@[EtI+8[Lji.>Qfcfc_TdvZ7'G`K T =Dϋ빠DVc"{ċ.UUoKf-Un26h PI|W#):<tlXSTjWGaI|"]&3FSmcFT8;4s=4tu(m7Brн7 /^7 0X$ÄxJ8׾p؆~/ν(؊pؔQzl| qգtu)ӣF! 
MvƃOgY{Og9o;g2΄;f‹GWf{7Qr9"4p}qyMuR]?mtV^{t* k.$d\5)IZ>(ˠQʿU걘tP)NLXsaL`4Jrfg߷(vlZCm6THjbQCGɚYSF5SzL4aqdMR* * =]a+ZggUڄmZh]QȚPm{$1iH\η{.e?+Dh4RʳB+婨ujUـZ!X+,9kÃkg;G˧nuPB@X" `EBBƽQӀB뙏™>*i"2rKQ"5d*y ceLؽ<4J_v$prB#N&zNIp)Op + jp6N={732絗 Ք3hwdq-|I-Ơ$d̓)n޴ozS;ll.:GDTJZq| Mu 6 RʔVpjA}?c3 CķA=QT,r}a&b!e-+:1h-ׂ JF+0@5i1N7dk|)zX({/JMFϿn@E/b7W<;gv>EgT_>8@"n F@OfC-[՘]DW.lROLVugJ5?~M*ҼaBϜU 7w߾k$K,m04 zx8oB_a7Iq7 籼ub~8 o A'˾Cp R{VWжb/-կ hwLH6 p[H8`|n&)u(W?aj!ņڂO1zvVcanA6nfM a7ͰUh  Ff%DAE*\<a4Pi0x+0 !Y^\̿u|ذt!n5=ww'6UF606Ͼ'[]xFBDcWb۵>C J-ԨڀBCF\F6@k#-aI~E:JoQ2ќ^EsH79/`)5k-Jnu$4N2;,8 QkmQm?z&(K^V~e ђUI:PIKp[lɯC-sA} -+Rt#깶5)4%n"JAʾ$_R]r-K,n_ 4?G ygF[|ճݨ(MP |@h>sƴ̼Yo1vdO.ADJ z1l@\9 sWDDË⮇^ -HMG1;19zf m#-#U+9GKv:dEVL)(,o 5O͊u4ג1Q`~g;:1?d׉h'>ia_7s)MlDIuk-4nk TUR{E8aUBK+]+ERZwG0Q;Ң*=Pzq+?u(Q&cG `^UGæT @eKZVR Q4Xce]Bu5!)vIV^+_ω?H TbxL݋qT1}듹wx 1 VNwh{6h珳%%$OC4KPtj yI?)8lkL_195Ÿ?,-L`z[|eÀބF+h 4i͛ݸ0k^d>qV+ AWMou,gJ-|(gKGC*]mV~gkH߶<KqT$ֺ,(`5 s 6:OjbBFn>IbRc\#yAK)r%c8(us uBJ)^iIdX(<_jqjf3NYN, feOī R3 \¼?~ Lo7iY}^R?LWOOrwdZzj-&#} hȟ~Wbɔ0Eb.aϾ|}{󨹞+|%yCw\QioO 6=4 7wlju=vUHD'\m' Fv5yXVخ$+_ў+VjP]Q B|UgRbӷUP _9{P`Ob;E6xҏf(ϗL=Qi`f>@(707z+HUL| Zc PF+ʟlW0])x㘷vֻbԐz Z/8)m7b!Ϗ_]g~NŀGǔ0$r7hнày=GFR ?sO%`\!PS^J j#1,++ߔAcٻrGY>sFitUNj*j~yǟbF)~'%hk- _kIqB[)禎8Xyh) \K%ǔw{L b {9%9M9\3&b=_Љ;:tA'}t O{4Ԣ@CL@qc GJ\cp%uR*+OJ 6DW84J9{cS Ƭ*ru,^kh*%)K^ Lh%^9mJ+90I2yk^\Ri>L~|NE2We !EWaP׳?,}3??yo׾ ^W-`TPJ8pV9O B&`'QCeB&!gAH)X-x/df..vRD&/z3Ye%~ٽ.!"[\ BͺkqKAVkKB#1#>0P_Z`Uv4ԋ-9` AQfr^ *)ZjY-zi$;-N[r-\%*-IJ{BP;r`֯%cD}?bCJGs1 R;H4Z*9NS ʹe1E?, U'%t^%:`׵* cZ nViJ_[?Qn*#Qdfg*ǀkpEugUW2#Ք3"T+gSS@5?=])Zf@.rBw ڧkP ُp-dpS5BHe=} jJE0e;å@ o._(DRP8p`hp;Y!п fGpI>3;Z!G{ 4iܽ0gMn/}7{9Tm%.ep(@ p'Fׯ^ͱgt mPlɣK}Ś?&^9X$o:WՑךr *YnH!.#Q=0 LY|)%)̯fmp\ݘN4"ҢOLYN&F7Al@LF3ڒG5.eFj&+BKU,G)aь 5q;DwK=F DӑZXӿ#_ + f<#Zuv8yS^8\ܣ:ދ{FRrfHmAHw d#ۙdqD\& y?fs 0ͱg}Ew9P ystoȇRI>"g!! 
.n;E0 zgu$E k)j!%ս ;hT?C'ux޵6r#"ebeflFͶ-$'[얥b}XATwbXKe`*tŇςb꽳(xɎ!~"% S;a3Z2Ճp]5 }T ̀KnYQM$Jِ[͆,5P B$EԤk׋?(qO09BhIȷK[`.; h>r.= [+`~ k,i:}XYݛHS۩iA)^ZjrƇS+qkb"l^nɨ)ټG98\lml{y9u.n䚔[ݺqU- g÷b5 h`i  *IH%ܹS JCଜybQ$UrW CyYF=j6eYɘK M3ìb2'c4CB 'S"1+;jfaEs|JW%O]蕄28T3$4L1c8iBHg,IAPʤZ(@f:6̿=%!Z):+QĘ c\2m$IZ,┡DfE<+&`wsCKo,乪 dyc @bdN:hfۼfGf~qZ]vVh >_\o^|0=AA;VEHӉI4FzruJ VeV["ϗ/Ā}/:-Bze;7/IB޸FT);߶v%R-щ}G֡5C,dk-zڭ y"%SvӚAb":諸:Wa:EGX\"#hppNNF]a:U8d"^f:+w# 0"py\Vk4B8N /^.FXn1]ß*a?6͓ZnT 0kXFm~CA,K(rVdiP#ѳaiZ5v&m+Kr]T.Nw |Ur梓l[9*'k?K]Qa*5Ob[J=bfvS$)Oa%ˢP3]q%]l^#sZlOE;+X/SO9nO6'Z>62 1mͫlnR(+Vsi<[ƐʘrQL!bB!,~eq oG LjjxrW!W D!V][ 5ulqjG/p+ݹU` -r >dz@`3v>y#so0.~ h`kϳa4lhe0w('.p? ΈRO֛GG~9ğ=8Y)F5mw@Fe)O[SFkq -76ɯܡ޲5סQ0UL:tjIqѡ6CD4TIM2GEV$8H#T_A|y~JPDb*d2FfeCe36&I*#T|9>U{շ{ƴ,Uu)la+'|?饣8 X*ԧ5$[9po:yD>3",Ta: \SĺslQ!"dIyW5p{WU)-'[N w)A{8~p9`fڢstLFۜCa|C""^; gv̰; ?kW y?JY51Q)R9*#n Hgyeލ]⋾>? &ӧ[d~sq[2W><*X+үĺ 3$ͩ| Q-:;ţԿ07{D F ٖ:>T 9|3޾q\?`ҽ@ G4C}3ΡKPisiA_EK,bNoC}|çqNEMٶHuuz˕Ǻ}gl_u"ޠP0{s>-ԡ(.YHk4%5Rafsh wJݿZ3`R W6]fHL:-UjE ~oQ+ R{u%:h29 L~] Wyl3_حW`^zv[V`'h2)9")F(%Ġ$C3+9^嚀 JI5Ϥ6`:|>,F]>y]>mAlon`4mvh/?^>AvNjq [oxzlxZLXQa3n"&R1)T Kfj8S+0|`}Q|ľhm'bvn[:z[EۧkMfL*IN:nVAI$K3c `׌ խ_rh j^1pdEPb*9\08E9 _gaXN`p"Da_;hX56 h mN+` qЋ]@OP&+52iEq&"HVRP`U i@MLY&tVƤȀh K:>񽍴$ HKX`{ |Q+any&:=`hSq'K3 }s1%`(a|wz\f-Z"o n}G7y¢Gsُ&oVd_-Fgbws~ǗHE|x24n? ͅS|? 
̥b 'L/71 RYEُ0vwsb]5֬23fX{P$v%c2"I0BVȭ6; '7.8skx&9^r|~,fEL SL֦4}~J)7ňw0(?,IVN¸nK ϗ#8g☕jf481N!NXe" eZZIb7`*`1}4H UϤɳ%k {j[ `wxhy졖X%ApF T{x/K g^k:DK+ȔA#ߧSo}W|tF$([A9Gp%$u7(-]ZfK䶗좉WzW 9#S"-J;P.&'Fq#,BHD؀lSd9Q6R \6_Ό(qJI3MSǸTO`ap\8T:C vS@% hU""]Q@$:"Te.ӣBB^hh١D+3 :롁fB=N!%Hkɼ@EgG/h?do]"/EZ7Ӫ$ۄG!ۀ& rĐ9&eY趛( F4ك|@Qe2||k"8iB|N8UYat*& QfLuE14!UǗwͦ{PvsɄID*q*# "(QL!!g$Cp*q25I~#/n|fJ 6/7fDMpSZ\A_W\ tA[B@ +Mz֕КO7C^eWpuE/8J[r݄-У$Ld2Udgrg=xХxcwPIXvOв!0\>%_<[ 0R7fj-v?h&j@Q﯇2VW3쯃5(/vjALw~zrj] (p w_}98~=?adWָ[skN%ԡ.{)ub&O:Zx{Ⱥ> b27` 7n8]ûGw^Rݧ!0߷βƤbrH2^M~^mlk[$?E-Z{/V O"b`c&lvL*$թ4,$q[q#8M3($9_)?$4jQ9k)E%$8n1^wzQ$8xhJ9DFɠGkܺY-;T)F7A%ue_/z4-xP%+[i-^ TIMD>8J8nD[]]\6fsx\J^ƥ+FQ5HG b?-آZj'wC'B[4Ȧ" gkn\7\LlQ;&o`fg;yEs4>c#ľO? נ}%C^}{Ն:eLc!Xf;}VeF?gPj)P<8Ew-{'>/T㬘 :1wP;^Nj %=$:'dY/o.8^Pk0{P^x1r6 Ez3&9-)JFZB8ǚxհB{fahO\|σ3Gs iUN+bl!A #|"i&v"/vI%ܘsոcy .ost%jZ\n9瘅BaE=I׊ h}}di%Lm0M̬NP%7SbB2'ܺiv97ɧU!FؕدOǕyW`κӷ_k FJ0훧_d*1|{ ?"h?ebwxzWk(x_ ! j41&ǯl|o!F.O0 m檖 RjZM]>W(:@d\5CVi^"XQt,AT 8ؑ4[*JQYW!"1'&5o)E0qB@!5 YN6:Y08V,/Jj+ioZ"JNXAL5}uYA|uc?9);Ov"F6?1E& ߸bKRZ4c XE6S`44qd"RbPm|\r7#NfwU&)S)!!cvsL28E1VBú TiI.X[k;},@_RaHtU;ZkX$ |D#(Ơ2KJ`Z%&"Y?"v:՚kz}%\S<_ KU nndFrxQy!R)QtcZa,o+vZxuff޺O^QCn2׋8?F],mM??[ou#N;hk,H&8Y(l13X4儂|(> kq޶>B_bnli4֭3N_~XKC5ܹJH[<(LDHh)e;WPչJy#ۂ dg0@CY| )7h1&qāF ,¶ мMZm^!A䜡O[ TK{hjO45WJg33]Η,Drz$ *6n]|\U 꽽O~9M G?9h>#4I ZְEK^2 'LwA/+1hȏ/N-VS&E񄸤UR̢-(ffǴ<.lvzX+5+4t^MGS=N1Gx2=NiBSWXN!>,Gօ} S PRUcɣ c $qDbxX,4ͯyyRXsŭ}k%vϽ""#LjjWrfݼf<"wf>5mj U$G\(J֦)*/=9.߉|{<$c}C,>-.?OϥnCF[k@`?box!^!C(َ|: ]D[{E({( -`E`#!Oc뷖ע g nx \fZëR*j@SqFѣT! 
X÷I,&K&5:F"aFuSbbq}"@i@5utqt9bw;X爚\]5[hdb+ܧaK{!wLc5AA:}f"$  %%3jSq)$$MgjG( .W ew+Qam't_99| .ܦ^E5AQu-PKVP+!p(Qvjt#]n)Wv^\q[a^d{A_^UUZS;z9l,G]$s2{l]1m ׫է?Yͮ6 +݄^ә]MXyZ0]gnҺ^oRg^n?O\(̸ 㬤'{׾+W\DdJ!4nO)RQb#:﨣zawrxj&$; 4:j*=$Pt !KHhH) M\t081Q] g }GǍ#;3qzvzuVXvw~~}zs.Az-gۭV2p';xBpH9 laܗܼ{ 1T j=:+k 6xH+32<9K蚑`XRv˚Nfg߉&5Qҏ*.8> #>i# )mJҜ`DխgMQۦAvjZbӿ,uZOu]ëY<[5!!߹,=v͏Bg-W5 !߹ k7B1wn{O ImkCOք|"$SD >x4>N;hRS:ngڭ E4H$9bn—%FJPs;7vT+M.kt%>k?'4(H fi vs8Z![* -_ywG9uH 4 I LݪA11ϊLK }ճJ;m&N0ZBN{{}$5f:z{PTfM=Ma%`ICxS 45}&q%duwBk2 (ġ^&ՐPI6Bk? )hBRvva*dciBs:lV+TgO+!B>m+>,l,1lgf>6`o)A!hj€"`޼ -ܣVS|O4aO/v>ξl_y\Ui-S|UR:)Y4;O"Ev3_?8r\k2fg3o2fPÓZi "p ՑJRJ@#}x>#42y!DÓ3Mg&& O0`"bوi`^gښ۸_a%㙐8Pl'5Uيkl^б6H9I ) H;Q}98$hf _|jc B?+~&gkC8Iq zw"WyL9Vy07&G1.VP8:abS)UG[m*[h?졼 ay6Ttwa]eᤰ9@$NrH @>1b2c *ɷ?e_;82Fywd KgHza T,'T3]3 p8nX7ZW+ž/dK;Aʜ\uF+lO1Y][DNՓ܋-xG]Q_+V1m8Nm]Ȑ5E\C}]L}TY_bM/G]wk=)E!(`ά҆ SM)WS<){yG x.6(p{&E}JtCBD84* -UCH{6P ڔ۰ X /,D>  HRm`o <1Z Ax0_w0QvíJbךD`_ P44ې9~U1 rş3xIoU*ʤ|R?`WTXeFZ~ ha Źq?Nr?O\Z|S3 15S3 158f'ǭ"E&1 %3r*S0FSJHb_x>,)8#_FB*el a~Nο&?2?xlSB9|l^[xx^gzUtͷzWRޗq+ml\AZidKe2Ɉ{; *wWлX2B~<3kSrod( Isi9Jso 2a?^`P%Q7S9w:$v3&YV,dIGuseHpжЅ%u&69[:I)-UVCśk^*Hz(ߞb2_GP7&l)$ht[E(8U@8:Ӆ, &$x Brb NjLꔖ"Ր rh\Ep<\9XrPD?=r9WV Z1m p͍L#0"\{͉>IVJqHU^yYji r$ZJLY2c&LX w1/TjOԉ߰+HGP!SKy6PL01y!R2Y**fxc\QHe ,-0PK8fXU)$fW|4<_{-~9Ig D3&2&7iWwTxSFm:sghpEo90ԋaW@{Ө[]u~Lݞ2,7)_3 %=jT P1@,C,&;$pX`a~9RF!״}{N~=H wOW?VhKe'7b9*PNv?{.T7#<?yCu܇gV,͖5i}uu;oխ>1՗8]'+#h4=uŝ|muw%FPQywr։҇X\'BA$ݢpBMⶇۧq[/&3*-tb->f{۽9{lm@K/ Rg*AkWk)oo|nl?Ն)R>zy^#@z4S6PDO\{'[ۻAbT3WZ՚}q]U3#ȍj6Lgy$J[^ikp-/%Bh@9\n&(['s{77PP(8?iLNyHjx&CIpd"8rJiUr`E;S 4{セ9q)W׋T_/}39??: ZazkPHQytIq_Ol7Z2G{xZ% +IAs皃t^hGts㍚U.?{uavzM;Tb<ivշ7H<2Y)Z *t Qk0C6\h͔7G{Wƪϛ5cު5G49(ќ]*qjJ)ߤVovxx(\)LFW: ĸϑ2SLJw붘h/'Ɍ)S`FRm{M]gf8jxEAQ;(Ӭt`gڝѨL)Ml砑*Pt@OrRAď&]+! Ɗ*jBDޟ+J 8z![ 2>/ϋ"2*ЩbLN9UJL,zW@): St,q4'i rd6s搇j8kS\ jC UmЉ3ZO)AH+3zӱ,En8i,zM (Uf@Pt$xsBW:Cs T) QdXHD3oEh9e)]J.pt3"l'V7=Ite|utPsE\t㖗jOt5iSzɢU$pʒhg՟r)6R/bv*ш]>K 74/un*E%76(biGJ؍1!8+j{Ia]W,5@ڍW Y,y[d}MDuWPŠ+݆Ui  Zi!Z7 BU:v{? 
#BQ•$M@:,I@Z+^^N$%]R+όm1^1Uʎ7ocykl7&$6'S((.8JRMy";WVK#g{erUr\$>+5#M[W+j_-$>-Tu:K4 A۴҂OzL!fG)']^.ƤuхL-җ cX(J$ @'s\M0Gd]C"RZ)MS .5=UeT%mx56Ē˖X֊*&$#N}Hf,95Aasjlz4`$ BjlZ. 5ڹU{Y.twtɥ PJwFOE @Ε:5 O6/oiUQ.IUSo.ŧt8m|i»N)+0xrɻ[rRtp) eAj/P7F*qTONŗvMūT2Pny7xc#@Ԯ=]K 9f 02DH60!jlEY dgd3!%팭s.L4` <}sdAtz}ye-:C6)q+<a< |Njs/2 {wqC$E :L8B4 yDs t HVRlCX'<;Ik?]i\wdkخw៟qgd߅FCs~npn7A,A,DiO3fgZC#.#lOo6G޼|itn+7~L( ǫכS ú R11R4"6Cd5Œ(ְUYH0$,0_Y  @%0 vQ΢HKG MqQ EрƄDX8ˀ(ʤ44Iv>~ro/" McA4OOvӡ}/ϝ\0ư:Rf7YgNM:o}?9 i m]|>q2-Xr7 +o@ t|g?30?R3j![3ed '0ªFn w S^ iBK(8)F~RѝȏPsX0x,1SȐ:\~(R)#?9Z>_<\Ge7#KąH a;!E9J %K-]2$A PEvҢ%04rpa_0vGTmmk(ɞ*ĕ*3zDP){|XrmcI/7Y{cy{(9]2vOa7 VȧO? ll,u!پ᧟:I+A#N:dZWBYDyصۅ0eV8,(6\*#L$0"ia1:v3Qarp4}+T g6i3076VVGiAX @(&vq+ [ hE1 XG0&\0ƷUhl8NjKT)7i8)w NªgCP !Ƹx^^͂t Pc#X'EϝV, &zj+}?)}axO8 ?{7*A!MCPzXB@@hM g-ץ\۹.2W$?Ui9[7?Jo~&I ? iARN -ioAx"1"}dk]P rV# K>f :VLim8\ͧ/Ui%Ն4xyYtU!3ZTmF@ y6&j[)j^vfL5p*j!WK)~LG K3UcsXcgS}b?cgwccƝof0e0-rɭUƹ(ܒ˚Usb>/qtJ%*Q7Gؽw2Y [9V7-̙|%@?Z8Ñ% (JFX}[wgf_0>&PL Ah$Z+_Gv֊ +BK:{*ЙMDkEA_G=Yi6D(G٫ g^ݙ|$'0s7+̢*-CmǝVZ&O6;nbAe!V&XcCk4B0BXb4%1%W{Z1v)ttveb`aoLVX|Ga簀6)ہxBVus!|e,Fok8ӊMۡ3bLm̢1gZ՚1|[c+2x` p8X}4__q>ϦFaw)\d1N:bu>^>FN 9_K ,{!,W[vMd2M"8jqV[C?G*Sxl0zLPqET_$oWdrՙm 9\ps4{UNh>ЃVaϥ1w7 C|j8ʓʎR't;F %g:kM8xzE%e7ӵqSq덇?ρפE4f[fy*=#+ LI\DKɔ䭏ni7B2cjcݎ9<͜LծJ(*)yL|[<ڭ,EDh# O3n3 j*$o.edJG0Vne1(":eG1*)dn j*$o.dSK)Q9FSJZ&P5=gҕ xQ-Ͼv%hMN3 Y͐twQl۳+sveJ/VkrVQܺ{I?El2G~"k*%aJ,\Fw}IWKu5!^k^)%]|EzNJ՟7ǏCNA_áu8-./ZFVͼ8v!VPk48!ibJ CN֓@?{N\;)N_,}$=.^^#vqRLaAu7zG]^E}r|5j9pg *?HS^umg%5MjB}Mאֈ*7O޽}Wm>#0abφHa%kUV;RYղ|_DW@]b)Ze}p+dp̃[F$["SĵH:!VwB;x$,j(HD M}vC784N~/u^=dn"TK bULXPs!L4W 12M"Y0KGs~ n|ZV.(;eHELXG?X !9&r"|N㻁3~>q0$Kq0wܳr̾\u.Š^,2S.bׯOEEoN4.|]Vh*_\fJrZq,6bhb0dh%e t?'|L&*sT 9zTI9=5,#bC_[bTG<2q$B2,6XV@0V&"c=]BE.2TTH: ޓá13aPq$-S"8#r r;&SQ(E(N5p0 P8SL"h(,cS9/YC} W_Ff3vWC-}L{ Y<?yCa61 |A__A4O}=x r+zrs~G$$s&][<]Y{_:`VM0>ma!tt=!2d`I-lzhp}_Kq%(`tRpuTKTh;z^]uWx>yt];3FiwrI7t9M|[SM&~dR_>>ʄ\vz1]-i7 ) Pj^<*d&"VdW#b5s=͎M&c6,,cq6b|9Xy"D4M_e=eQ.m(s+x5g5qyeYDh[ٖc<*M ;J%=zX$b hga) jΡ:[ڕIi k&)m 
₴#b"gTq.GJJg/bCF("Dsi"5"iQR{{ #)S M q)i汋зnD\ xc=|Xh,|)ٻQP8>_=M Susg%z/~%΍|>@L弿]rDm/(Fc5"ݜoFl9յa|Ա}k}4ɎЇBz0?Qt|&cgSlnpA;*Xz&[?L̛} ~̡Y1,Ylb֍1عDRRmZۻN^4m944(ݼ[9V7-Y "riT0803/Zup&ԉ%[& ZQ4 M޵aFXH^xrZ9k ezIcN7gWfr,c: JDXyr \rSr }uurUD.}P> ^ A;EChWM;T 6&zs?Ԅ|EӴC| hH3o'"{M |)2\Pt%A^{xrOa\>, }w 6MLGoYPr8bL2VGdFcY8SDmQQN% }lo&݂SɌ/<w ȪPO R12ҟow][ܶ+?.^~.dERl{f4R7%Rz< ؎F*~U,ɯ:1l|)6) \NiWO~4oF18G楸ZWP+nbqwTTsfgKmܙ̢nyO !ۢԆxڥ%`\\Z7Q蜣\(T"4W&VBx-Út2M Uļ zR91:Gq6V77>GH)Fi &\L `/IZ#*նИ… M@\xu{ٱjrQ)JZ.PIRQXƈ bx%.A UHxbj"4W:=2,a\Il*8f#T(%4F( BEQɕp`e*RzSkiU* PKM@$V5=SBbFAX8s{J8Ai,H9pC"t`{.UD/Kʲܣ8Xyl{ qiBS¨**XT >~ގɍ!BZbD!(As+LnY\VkJEP]e1LySL'S_%$+r^U*[+uL8֊=O(vqh!oH0ڭtw$$j=G⣢|5 4.U6y C')֠{ĘPY#D&8fBxDŽp;fBH7!)p}zH[,y ?:\۟v' --9ⴏB\PPI3FHݓZ! &epOE}V '!ۨhw"TVҚ;n%*a~II1.qgQ%L}&{A4_/dǜd4c-N*33M%sC N;fq׀H zov0pW:2BoSŗ_ >*TezTA#o4 r&娋Վ/ { ۳1^0πx$d/5< YoH#B|d`oD;n=K=d^\ G96Ogv1R95JOC+ɁQ2G>EEB qr*-114 zQQoz(ҫY)2 #"s"WeXEb89їHP{nSX==$8䛞`H87mZC"ɄQQI7p1q%'e&lA9.k?=3>[γm[֑Yql]$>x߷[OV9dZO({ͥ/| s.pv[6J?}Vfcӧ6m?o߄\].ݝqŲvtnǸ?bڼWyS+P(k{S+P(o\ѕۂ̨NX~רp_V3hѹM\oRR-޶DMU_&y)0w c1U)ܞ2~s.ۛ*I*  |Tm +9DkEr 1czA(9I[vL[JtH ˥ь*jq q",8 oah ~(7=),w]4XH$σKMWre+eYYQ;h-v3|/?jaG[bJ!(1g20 @`Ww.ޏTR#QCȄCAW:ՙZ;x{Zc<CZwtL}{~yd+WuxC43IB3J B~ z| [!N؛#Ғl5qi!xw|c֐K<Ô0>7G2uF<_^0fAY{agdPgh젃-+]Tp,)Ε10OcP!npVPې7t53xdA;[!):;(i1tMC[g4n7V[-JM@rғc]Ƥ{G ALcTp00(aTTH*( F` q^=ӐE:k@WpS#W'Kq%94%REtIIcijb?"iE-5Tr%0R"TF+~I %j",VP%'4"T%J3nSU%2+[a*凊, x"A$2B:1g3 zd!bU]p{S cS ӈ) ˿zycE`*M{u͢J1j*iNO(dVXLd ND(8:Mآn8l~_w < iMKpe7gh ]?>l5ysiO[Iju? iD3s=?s]3=aY_&rL8R?6aٲ7Ta &D#lJJ>n臧H <  o&G}VlYlR4<}da(RZ rp w/ǚ`9?VuXsDb6i/is#lKe$_ڻk{%<orވ>gi&̮v#97vo%o=BXXZ{r,z `L3vLn=4 wvpg+.ێE(Qŗb^*7k;OLJ)^2*q*TX 4'.rB JYM4 X }YܣŔ6ǣBĄQQIпtC1!k3J-BxD!gۣ٬P j[vJ8J5׊ҚJUZJGEH%<$s' ]Ѭ3%GRQB!5rPKnKf;T{ ]z ^`zA> )Obu|R 8M]oDh ӲlI (;Jf)gR^R7K*bB9Lɜ,<<7^(@" `%Hʜ_S3ãqxrcfE=M~+1'FV$+\W%^Oa((b. 
ȱH [dqLtNpAb6PR c.lvw5Xj7{R+vAHNK7]M͆c]g]Dljhߞw bb:ݦiNY#klRXnQ65Իe5A~wnS봱bO[ M4ƦOy7hbNb11onSe(w/RXnmbt`vT!՚ J 9/FJf)d|n7MPnwW!̌Ty%l*qsb>۶kkM__ᖗ?|uv{z1nkr7 99 \Z[Tά'9#amH]9#ϟNULD:m=E LOO-BNɲIYAAԼ>V}'㇓ҐT*_Zp.FZ==4$H19u}{O柇{"䞑`s_)쁢-gbI))Ј~2*qj?=TH8J TZ=E%hw=#~!GEDxKS1$ur$@a"nh``qj\Hic 4RT"*"dGBğ?|!߯Q~F_u=?_ސd LW`S8$zѬ|חHHŧV jĠgjo-jfl 2'3qġQb.Eyk冋Mq5˻=ټ㇟'!ۏfB|u̖E*{e{ҏ9S'>ޔӸ-k"C.껋k^!5%Bf#1(4&`q}#՟f<:kͻ\T+#?oI / (JH!#J\U6O#.eDGQHi ÿ;[$4ȧ].dA ))O)IlAzh> ʈp\9 }Ec@fZf4\@B~G.s5 HBK [lҗx p cA4oKҌQC͋ ۢio:`'c #Č!YjD|@=W-@E0 Ubv8GAqۇ甲䀿4!W'-[C[о]:f$yXP kOo2xd'7Zh/WN}}nw_ֿ>xV*_޿S//J\ln%JC&VKwOsx-rQAѮ"#+[KTifzʑ_X (R$䩷g_fуY,xd˒#ɗd}(bё  `[~UŪ{U!F 'Ӟ!F+DHHդ4ѱw/CFԔd%IÙ͢?Lr\-P1NEل ZǪRP ]$[ zuO$v#9-O+9HΣKQj9'>a{VZX9URԞSCtVʟVrd- j]V"W^E*:6es1p䴦DY_H9Y,βj?ߟ,JȌf<\A[3k5Yح!mIPmI-7|A㶯=ps~տNXȭr/,'W?$Rm'nUᅵ d\mWdv;CrRh\XTd%Ys2ϓd9~=9RO3Cy6|#]fTõ9DOXOڃԄO'2~۫эIQq7zőߺ[RDjOJ\\-Wy![ihstx8d5m#tdt"ϛHTL?yS_֕ :]+0-Vtiԛukr{0eW l}~Z?/S`oDnIVdbbk/OU٧ VeTZ.n/vNXgBՇ.%e*XՈ+ Q RUi_)vGsR[rnƵQ#,'9i:c: AOo.;!G^D ")&?T!(6vZO->2b#*#htfy,rEpUO%{E`qƔ0Y)GC?.aB<-)9 %Md\p$N ;ZOv'bi<>d*솨󜾁QQ9]` %TN1ɸľguh'RPu?f㷝}B X B/Iz e81dَiхԦ7eC6d_S Y)x?/ g/P!.%5vZeB6%9K9ǃϝL .?9fU1i*#!+QG {ߛ;%Q㡿9xkZ1ɳc& B_L!boT xr`_3VK5v?;K]>$'}vr%( Q d7Ƴg9x'Xͮk32jZd0MT.BoY@:i'/[~8=ծx_N} c(\ T"T0Le]}- Ğ%߁w%%߫vۡcI[,Kjw>ǜ`1-sJqO^*3?#Uab7޸7@+ }Pf$Ha?A0ѢN/^?^x~÷9u9'q7 `C K 㠧ru4h}ZKhrʴ2Bs&sJss1 .W+lADhUo0ma1NVD^-WO*%p Zj5p[Z܋`XrX\fFdѨ-}>)Sn %cBX[N_ /Ճ ~U-бzNh9wbi'BW,fdl mMUݎ'ד @tN\ i<$$'ydM_=1;ڸ_!N^n=tJKtAKQad09ё5ܛQ8x3w>/]sؑ 0`ڥLpb_kQqylvCl'T4x5-'GԵ0!;qt.)Q(Hl%8H=n`Np;!߻kF1>t*#3'Nb" .~ϹjsXvմ}}U/JqLz\[nk9 {OlzM3icʭYRF;Z:>*~-XsY,obf e8_2J:1q""eDt7q Rlp.Y"BL'"FL)s%X&)3 BF6a@8kp:=NZ0GՆ>jC/h(_Uz$IՆ>XzUP^Tm胊_c;rLՆ>ȭ3PՆ؉sj6Dڀ!97Σu'vNtg!'aեWVN MJ ׵{!Y[9α!a,ȰA+`˜t閷^a36WL0.wbZiYxv,Y McȬ`HCE n{?uzbV D%cd%XJjH2:\M#%pDg/ M-OrF+;4x7KITjbvBU)vn6l@˵‘i3`XT[ mh5g?xɬhy(W_<\s$}ZvNDhu5>ٷ^ Z.oy׷P sguNԽk['>IV l뜜/B9< 6k+]O&_RmGkpt}|*iy:Z-7V~FcxB?K#0u$;DKsz@l6F"]V~QT"Kv*F 2&- x9 Zb)y:Fi;Vi _V({ߢl -V;Q'JGUY[4"XUJ5EuxWPٚ}M?^B"vl &Cق#C}U[ie+ӝӭ1a!Q a|&~2fk$.m;"q-I6<>f?]^_<({몤65 ;Z}]\<| 
i{8wy’o͜eX z;|Q{,~*_|!P;%1~}R72dxul $a<;QRZV%}=?hk)_ܭ{cIӑ8HVF#',ir1ɍȷ:|7|bL/ڏi~,2D2 p{w֗7&r]nR?>lai'4ieqo lGHkbRe j WMnn]iWwyͻIby9OKC'$G b '0y 1eŚ9ds_XH1ePdD urlJBE *{q3// Ӄ_lʎD{dFԖmXDt@dm^?Q8c\+uʜ"3O)7%(R4X_6QC}9y^+3;wfr.\RDp;&[큾kѳ; 'ϵ5$ P2[^6Z>d(X-j`eJg lV30ߡdV> o AXjI9x]::y˱i>ŲD,Qj8v\ș,DY8`j)]OjR@LJ{aڹ 4G PfiD yH AGR}'Mz#&I)k1޶R^Z4%]S:_&>'}Nfq֛V8ت e:82уbFkev~{@(8T6քfH,5kdk=Tm?O*3uP".QIcHɎ6;UsZ2;$6bEj2NJ"0Z`r5.|!sG^K#dNe38Zh $UL{NQf'bJW%i@b8 9xBFe)%A,nC~h-؝aFD{3EbZtj(jq@-v[D{y~b\O'zKZ{U/W t&}В^3ăZn;r~?L,Ь тW1*TAf2ҁ^Zbt e%+ʔ"jzݨ!`5kuZTEDrkq [ &dcCp.9ւ5kſz:M8PR@=AJNqX.T] F*FYf:}UjYZVˑ9@R} Gm\8;x]^'[ JPTvj%5@Z;IW{TUx%B wŖsA Nk]:1WVbl>cA;aj-Ɲ̊*b2ٳGB{ؓyo-Qx ggQ/^&eWs2,_N?Mr-hE@NU^sbfkö_#f. +. B%VN5fQd0 F)ɦDAf_:mb1W՗bbUPQJ#ZXNeHA?ُ<&O$d%k]e=W@-)O"JmAnu6\|g_U+TV S3 jƥ`jQi07Ypq BR}U $ȶ6Q&&$U8W:8ǷULGG iOӋCo} "bUG\VF8l1U'*5Zu#תHײUeO]U`0 v, I3֥3Sa!?%70<:"r<88)l:FɠP<&7 AM6|LcEX>P(+ w:H?b}eK2#v(|N`#%KNQ!D.`.ϒ%+pvtvqPψ$-i2`$2)Em-B0Xp(#Lަ=<gq/j,}eKgch3B3ٻh=@a{C_=bׂ&s"$]@AhG!{_yC ;bA!Jގ:f8kwj&75XM>rq:2B*-E gkfBwd@M/_29B<|^-1Udi ŝ- ,9x6)Ev*O>1+."=Ha5ЗEra깥l}tJ5y`=GYv l1=.|\ClH%r9k l5 u/ja2? _|wߝAvǥ\%&$z H5HF_ ,/7-.t25nָ,'e2Qo(`L1R{˄=OLps| dΆ@d\Hxwզd9/VmJXQ&J= :bB֨ukŦ gasXmt&5(X^b.D!YEĚwoʭ՗0Ɛ[52*͹l64 ba:]NFBGQ3fV_LSh_GlS;pXn7&ph-<jʼ!d|[7"Tk}'Ƣ_"X,b94r&C8 HI*H].}9Ī8znTbpѬy@{hB'K$޴o5βFxY2RZ3K(遐z U,-'$7\RPDddd4HfeÕse>ZZ|wzZHDk] mY^t v s(]_ځ<֣NOMytb6lrn.Ýx.@]c[Ujۅ N'ēv`3>ORp0vgV*%|8<V< 3%]k>.pNy$v_OJ`XJzA-Il WTϮ$]ivK;F+r$tg4Ϯ$Qd>B џ$zip>IIb{WϮ$2OQ- tj'mKHж)kWokt|}n&N'hCm ᙫ;Dvȗْ :phsb&B~7ϋjSOJgg7F-W>e]/ra6M+2s Y8:Ơ-X?Z} fսhFȝ1da!tZXPؔeȣvJdQ 0`FK#٘xxψrFZ^J9B6|ZYU]4M7gP{_!qi9B!B\j3\ >Wkr rGPkQyF)(,D<{MU򁜏!e$t-oTmEQ}TS-̝.ʄ3lR+Vh}TSMKnknyw RݲwC _tԾ.5hTS--i-m>wAH-&' ^&CmQ:kYLVC ^ -T[gW_u9hL^db+kmsqf_iYe6OB4:؀zh?SʵNY)md+I/d,K! 
xPksrISҎնBGwQԾZi4"Ii$Z2 IOJf[f؀j{$5>2*{ZpLYa 4Ulc݈.̃7&g:ە$ߊ5>֩_֌]\3RM%Vh}TSJ2EKwQK2M.cKvkiT/ZZ[MVݎ]./^tILm>WW|.hT}E/RRK݁펑 )"@\the-%כsҩE9()\P%REBryY=;Υ,`E :5]TQ@4DZB,`hPfIlQft$CABh@OH65GF.qLM&2- K@*x,Ҹ lC- 1r"BѴԅ- "6@s)d@^Z׫=؟r͸uiʿx6fsK5֗kZ\;-zQBK%;άܥ[?2*_IU{?}c~ NҔeSs{<.n2nOї"5grJ0ybM4^zog s R,Z[8>I󊫕?%ƌz|ix |5^*R7k^Eƫ4n}@W4![4i"Dխf{ ?Mav0x/ׄLOO5iM4aLcʡƤI/o5z晤IQI޼lzt9}|\>|F咬jNMO$H(U^W/~wjJ\.5&`<;̧gˏ,燈Pc0ySrǾFSkźDnesp5d륳>ZIh׮97yKfrEsie  o-CNb/Zb/2Cʚd;Vjɵ[z޻E͹^iv\Tq?G_QK n-wN:Hfp n3 g.:V ԚDDs[c5H 3<~NRO\{Q,ݼZI4Pc *Z囩ӝ3 \yDmA)|jZ,}}З^*r˓s^!~Ou'Ŷ<xe h) -Xp[[8pI 鴼;+A~' ~SQpx>=~<~7 dBa}Oiܢ;Z"k{#Cv=o{EmX""s4+b:Bx]rوRqYCY+C-(T$lEXj6{n%ϵ 겋+3q+,~bQ~ʩT-|+X+|qR߯$ ㊽Dϯ{z{a5sm4a,!d$B3%RVDvH$}c6s4Ɣ Q/"\!`ѰrDpB(,;gTRª]SlhzxF  EC)C@5T(FI;: DZ7A(7 d$% 1a̅]֥- S&d}wsZg-Y Hnrԛ ߬7o%P4A 7k"L[3AY h~bZ`ӊKÖ,⻋t2+|<[ 9ӂs6HF_`7f<}>0sq2fit2UЏ:>^`l*CǴh*6 x9,]s Sa33{,k WRvīkw O6>fp7͵|6+IJ'U_MF|wՓbcש9^“͹4W#q sOz2#*-sTW\fo S@@$5X``vR1+1TöK04׏f}s)Z 0wk~m)ٴmftZs-MF6Zӗ?S+2wkW:dq}h4 Ak3pfL[T+5kub9峻*bT׆q<[ڍ=0H D#ü?sIn <}s[7.Q2%}-&!hX |D'!m~&WS!!o\D˔FJr:8mdk} RKiz j!%1&R}&  N,!HlYlW+6FDy.^MAdH?G&Ղΐf[TaH)C~RZPG)=h)2u 9/6(,!9oܴcR[T+FRzRJ\Tq9)}M1(-I)) 4j9Nk/.yLM3ZUf0)ʤ7e T bAY49 !238q<@uMDٳC2Y_iS*]8˹N0 ͍ D Ih" I)HSSw2 @N;msl6 zEG 2 J(ϸfFJQ Z(\:FF $Vj+)caP\o=Q䀢{"SI!9"Id&9K3)g"K0bRBSI ]{1@b4R$(jg >20\QFݒAn<ʧbŐb]qqR< dR0VtIHBTnb&=_G+BW6^\Fb4ERP"]V_]L PN'd^ɖ/¶l>&YmSXg7OM.Hi6]dN;L꫚MĢit\bdt` w1a9CÈ&Zv;X.dojp8+=~Q_,DPZ1ۍ !UpT K0ƇaەVfNMj|sKMK&78iI%7ͩ`ޜ%hg{C+ Va(q"r6{1vzzb5_ zx`W1+[35,;9Wt\'9>.G?ɕw,.6cl$^5#4vo"W:ۂq)nL CEb%:n *6vݢMj$䍋hL)>@vS2u-щ}GvcY*BDo-vkCB޸6))=}Q`NQNa(^.:Aܕ[?QYNZc.s7zCp5q|+䲔rp&8[ޏfWş]\qȅ3A6ǧ&izzCKNW- |'R!QIiwٗrqr)r7Tt5ُOgRS[t{gLjuyp̎|W8?SL?LIN˔=wXir=Zd'S?Lʙ͗sg]~ޙϣ@'rbyo[gͤo^/*/>잛`BR@z5Ӥd>5(| zEڝ@T2x3=C!5[eV( [5 9f|Pys Zl0҂g@8pOI\a1N*dQ4BI\q$ iD8⭇{5BPp)L񾠡LR൶#)CU}퀕pQHmQj`nn:Wh-F۠yR&H嚗 7^Q8NO{Tps\\ ZwkܓE'|ppz<]?rçP8"UlH1VS]^R~\ֳwf-*3eҀ%c9GRA-"8M̉SI!|"~*zM}=mz} ˾LQ<$hāXXKhޜ! 
Jan 22 06:35:06 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 22 06:35:06 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:06 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 22 06:35:07 crc restorecon[4687]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 
06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 
crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 22 06:35:07 crc restorecon[4687]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 22 06:35:07 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 22 06:35:07 crc kubenswrapper[4720]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.981591 4720 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989643 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989701 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989713 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989727 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989740 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989750 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989760 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989770 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989780 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989790 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989798 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989808 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989816 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989825 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989834 4720 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989842 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989851 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989859 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989868 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989876 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989886 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989894 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989903 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989936 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989946 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989954 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989964 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989973 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989981 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.989991 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990000 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990009 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990018 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990027 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990047 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990056 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990065 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990073 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990082 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990090 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990100 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990109 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990118 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990131 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990141 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990153 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990164 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990175 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990185 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990195 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990205 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990215 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990224 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990233 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990241 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990251 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990260 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990269 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990277 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990286 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990294 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990305 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990315 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990324 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990332 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990341 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990351 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990360 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990369 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990378 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 06:35:07 crc kubenswrapper[4720]: W0122 06:35:07.990388 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990559 4720 flags.go:64] FLAG: --address="0.0.0.0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990579 4720 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990603 4720 flags.go:64] FLAG: --anonymous-auth="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990639 4720 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990654 4720 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990665 4720 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990679 4720 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990692 4720 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990702 4720 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990712 4720 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990724 4720 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990735 4720 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990745 4720 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990755 4720 flags.go:64] FLAG: --cgroup-root=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990765 4720 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990775 4720 flags.go:64] FLAG: --client-ca-file=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990785 4720 flags.go:64] FLAG: --cloud-config=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990795 4720 flags.go:64] FLAG: --cloud-provider=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990806 4720 flags.go:64] FLAG: --cluster-dns="[]"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990825 4720 flags.go:64] FLAG: --cluster-domain=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990835 4720 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990846 4720 flags.go:64] FLAG: --config-dir=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990856 4720 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990866 4720 flags.go:64] FLAG: --container-log-max-files="5"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990879 4720 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990889 4720 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990899 4720 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990941 4720 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990953 4720 flags.go:64] FLAG: --contention-profiling="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990963 4720 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990973 4720 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990985 4720 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.990995 4720 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991008 4720 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991018 4720 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991028 4720 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991038 4720 flags.go:64] FLAG: --enable-load-reader="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991048 4720 flags.go:64] FLAG: --enable-server="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991057 4720 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991080 4720 flags.go:64] FLAG: --event-burst="100"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991090 4720 flags.go:64] FLAG: --event-qps="50"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991100 4720 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991110 4720 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991120 4720 flags.go:64] FLAG: --eviction-hard=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991132 4720 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991142 4720 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991152 4720 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991182 4720 flags.go:64] FLAG: --eviction-soft=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991192 4720 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991202 4720 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991213 4720 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991224 4720 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991234 4720 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991244 4720 flags.go:64] FLAG: --fail-swap-on="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991254 4720 flags.go:64] FLAG: --feature-gates=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991267 4720 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991277 4720 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991288 4720 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991298 4720 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991308 4720 flags.go:64] FLAG: --healthz-port="10248"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991319 4720 flags.go:64] FLAG: --help="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991329 4720 flags.go:64] FLAG: --hostname-override=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991338 4720 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991349 4720 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991359 4720 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991368 4720 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991380 4720 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991389 4720 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991401 4720 flags.go:64] FLAG: --image-service-endpoint=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991410 4720 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991420 4720 flags.go:64] FLAG: --kube-api-burst="100"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991431 4720 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991441 4720 flags.go:64] FLAG: --kube-api-qps="50"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991452 4720 flags.go:64] FLAG: --kube-reserved=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991485 4720 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991495 4720 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991505 4720 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991515 4720 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991526 4720 flags.go:64] FLAG: --lock-file=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991535 4720 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991545 4720 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991555 4720 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991581 4720 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991605 4720 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991616 4720 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991625 4720 flags.go:64] FLAG: --logging-format="text"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991635 4720 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991646 4720 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991656 4720 flags.go:64] FLAG: --manifest-url=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991666 4720 flags.go:64] FLAG: --manifest-url-header=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991679 4720 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991689 4720 flags.go:64] FLAG: --max-open-files="1000000"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991701 4720 flags.go:64] FLAG: --max-pods="110"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991710 4720 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991721 4720 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991734 4720 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991743 4720 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991754 4720 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991763 4720 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991774 4720 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991796 4720 flags.go:64] FLAG: --node-status-max-images="50"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991806 4720 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991839 4720 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991850 4720 flags.go:64] FLAG: --pod-cidr=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991860 4720 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991874 4720 flags.go:64] FLAG: --pod-manifest-path=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991884 4720 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991894 4720 flags.go:64] FLAG: --pods-per-core="0"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991904 4720 flags.go:64] FLAG: --port="10250"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991943 4720 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991953 4720 flags.go:64] FLAG: --provider-id=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991963 4720 flags.go:64] FLAG: --qos-reserved=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991973 4720 flags.go:64] FLAG: --read-only-port="10255"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991983 4720 flags.go:64] FLAG: --register-node="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.991994 4720 flags.go:64] FLAG: --register-schedulable="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992004 4720 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992020 4720 flags.go:64] FLAG: --registry-burst="10"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992030 4720 flags.go:64] FLAG: --registry-qps="5"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992040 4720 flags.go:64] FLAG: --reserved-cpus=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992065 4720 flags.go:64] FLAG: --reserved-memory=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992078 4720 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992088 4720 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992098 4720 flags.go:64] FLAG: --rotate-certificates="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992108 4720 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992117 4720 flags.go:64] FLAG: --runonce="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992128 4720 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992139 4720 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992149 4720 flags.go:64] FLAG: --seccomp-default="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992159 4720 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992169 4720 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992180 4720 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992190 4720 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992200 4720 flags.go:64] FLAG: --storage-driver-password="root"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992209 4720 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992219 4720 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992229 4720 flags.go:64] FLAG: --storage-driver-user="root"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992239 4720 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992249 4720 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992259 4720 flags.go:64] FLAG: --system-cgroups=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992268 4720 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992284 4720 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992293 4720 flags.go:64] FLAG: --tls-cert-file=""
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992303 4720 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 22 06:35:07 crc kubenswrapper[4720]: I0122 06:35:07.992321 4720 flags.go:64] FLAG: --tls-min-version=""
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992331 4720 flags.go:64] FLAG: --tls-private-key-file=""
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992340 4720 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992351 4720 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992361 4720 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992372 4720 flags.go:64] FLAG: --v="2"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992385 4720 flags.go:64] FLAG: --version="false"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992398 4720 flags.go:64] FLAG: --vmodule=""
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992410 4720 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.992421 4720 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992701 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992713 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992736 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992746 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992755 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992764 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992773 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992783 4720 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992792 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992801 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992812 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992823 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992832 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992842 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992851 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992859 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992868 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992876 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992885 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992894 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992903 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992940 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992949 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992957 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992966 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992975 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992984 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.992992 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993001 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993010 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993018 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993027 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993036 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993044 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993053 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993061 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993070 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993078 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993107 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993116 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993125 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993133 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993142 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993150 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993161 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993171 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993180 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993189 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993197 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993205 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993217 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993228 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993237 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993246 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993256 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993266 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993274 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993285 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993294 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993302 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993311 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993321 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993329 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993338 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993346 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993355 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993364 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993373 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993382 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993392 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:07.993404 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:07.993485 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.006217 4720 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.006274 4720 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006425 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006449 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006460 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006470 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006480 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006491 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006500 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006510 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006519 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006527 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006537 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006545 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006556 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006567 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006576 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006588 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006598 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006606 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006616 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006626 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006637 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006645 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006654 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006663 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006671 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006679 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006687 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006696 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006705 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006715 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006725 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006734 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006745 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006755 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006767 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006776 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006785 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006793 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006801 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006809 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006817 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006825 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006833 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006841 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006849 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006857 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006865 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006872 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006880 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006888 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006896 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006904 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006934 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006943 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006951 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006959 4720 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006967 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006975 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006983 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006990 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.006998 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007006 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007014 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007021 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007029 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007036 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007044 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007052 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007060 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007068 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007077 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.007090 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007337 4720 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007348 4720 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007357 4720 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007364 4720 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007373 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007384 4720 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007396 4720 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007404 4720 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007414 4720 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007425 4720 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007436 4720 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007445 4720 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007454 4720 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007462 4720 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007470 4720 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007479 4720 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007508 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007516 4720 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007524 4720 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007532 4720 feature_gate.go:330] unrecognized feature gate: Example
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007539 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007548 4720 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007555 4720 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007563 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007571 4720 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007578 4720 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007586 4720 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007594 4720 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007602 4720 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007611 4720 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007620 4720 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007628 4720 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007637 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007645 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007655 4720 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007663 4720 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007671 4720 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007679 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007687 4720 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007695 4720 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007703 4720 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007713 4720 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007723 4720 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007732 4720 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007741 4720 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007750 4720 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007759 4720 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007769 4720 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007777 4720 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007785 4720 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007793 4720 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007801 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007809 4720 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007816 4720 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007824 4720 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007832 4720 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007840 4720 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007848 4720 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007856 4720 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007863 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007871 4720 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007879 4720 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007890 4720 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007897 4720 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007905 4720 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007938 4720 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007946 4720 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007953 4720 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007961 4720 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007969 4720 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.007978 4720 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.007991 4720 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.008235 4720 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.013104 4720 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.013300 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.014149 4720 server.go:997] "Starting client certificate rotation"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.014187 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.014439 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 18:04:10.128110891 +0000 UTC
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.014578 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.022092 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.023699 4720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.025193 4720 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.043052 4720 log.go:25] "Validated CRI v1 runtime API"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.074442 4720 log.go:25] "Validated CRI v1 image API"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.077315 4720 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.081403 4720 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-22-06-31-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.081469 4720 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.110135 4720 manager.go:217] Machine: {Timestamp:2026-01-22 06:35:08.108163924 +0000 UTC m=+0.250070669 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4713dd6d-99ec-4bb6-94e4-e7199d2e8be9 BootID:234f6209-cc86-46cc-ab69-026482c920c9 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3f:55:87 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3f:55:87 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:29:72:80 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:52:bf:e5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4a:21:bc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:16:c1:c8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:92:23:e1:a9:b6:25 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:e2:94:ac:f8:fc:11 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.110498 4720 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.110731 4720 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.111427 4720 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.111682 4720 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.111762 4720 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112148 4720 topology_manager.go:138] "Creating topology manager with none policy" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112163 4720 container_manager_linux.go:303] "Creating device plugin manager" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112433 4720 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112479 4720 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112790 4720 state_mem.go:36] "Initialized new in-memory state store" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.112951 4720 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.113858 4720 kubelet.go:418] "Attempting to sync node with API server" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.113888 4720 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.113949 4720 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.113969 4720 kubelet.go:324] "Adding apiserver pod source" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.113986 4720 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.116645 4720 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.117082 4720 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.117174 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.117253 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.117363 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.117362 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.118586 4720 kubelet.go:854] "Not starting ClusterTrustBundle 
informer because we are in static kubelet mode" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119268 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119300 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119311 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119327 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119360 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119372 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119384 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119401 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119415 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119426 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119452 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119462 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.119837 4720 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.120402 4720 server.go:1280] "Started 
kubelet" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.122018 4720 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.120995 4720 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.122633 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.123523 4720 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 22 06:35:08 crc systemd[1]: Started Kubernetes Kubelet. Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.127896 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cfa11d38cbbed default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 06:35:08.120366061 +0000 UTC m=+0.262272776,LastTimestamp:2026-01-22 06:35:08.120366061 +0000 UTC m=+0.262272776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.132150 4720 server.go:460] "Adding debug handlers to kubelet server" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.132543 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is 
enabled Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.132768 4720 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.133117 4720 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.133154 4720 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.133175 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:24:06.993522328 +0000 UTC Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.133270 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.133351 4720 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.134526 4720 factory.go:55] Registering systemd factory Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.134574 4720 factory.go:221] Registration of the systemd container factory successfully Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.134767 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.134757 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.134876 4720 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.134993 4720 factory.go:153] Registering CRI-O factory Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.135032 4720 factory.go:221] Registration of the crio container factory successfully Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.135157 4720 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.135206 4720 factory.go:103] Registering Raw factory Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.135258 4720 manager.go:1196] Started watching for new ooms in manager Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.136644 4720 manager.go:319] Starting recovery of all containers Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146605 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146679 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146702 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146723 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146744 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146764 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146784 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146804 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146833 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146853 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146872 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146893 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146939 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.146976 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147002 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147031 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147060 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147079 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147102 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147122 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147143 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147164 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147183 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147204 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147223 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147345 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147371 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147392 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.147412 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.151896 4720 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152110 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152137 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152157 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152184 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152205 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152234 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152259 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152278 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152313 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152336 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152363 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152382 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152403 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152435 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152456 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152484 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152519 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152546 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152576 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152603 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152631 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152662 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152692 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152729 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152757 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152779 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152811 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 
22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152842 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152870 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152898 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152953 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.152973 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153004 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153024 4720 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153049 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153079 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153099 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153124 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153154 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153224 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153252 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153278 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153303 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153335 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153357 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153393 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153412 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153438 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153471 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153497 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153528 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153548 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153570 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153599 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153619 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153643 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153660 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153678 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153699 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153723 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153746 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153765 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153789 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153823 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153860 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153884 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153931 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153954 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.153996 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154240 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" 
seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154271 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154299 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154321 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154347 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154366 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154406 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 
06:35:08.154446 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154476 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154524 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154561 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154590 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154618 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154655 4720 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154678 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154706 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154728 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154755 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154793 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154821 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154851 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154870 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154893 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154965 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.154986 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155011 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155031 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155048 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155077 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155101 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155140 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.155163 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" 
seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157485 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157612 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157661 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157724 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157778 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157845 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: 
I0122 06:35:08.157941 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.157990 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158024 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158058 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158107 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158142 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158191 4720 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158223 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158294 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158339 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158371 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158409 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158444 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158487 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158537 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158570 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158674 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158760 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158797 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158862 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158903 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.158976 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159051 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159158 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159183 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159201 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159248 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159269 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.159287 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160206 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160277 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160318 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160355 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160386 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160419 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160452 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160482 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" 
seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160515 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160546 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160581 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160611 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160642 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160673 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 
06:35:08.160705 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160734 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160765 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160831 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160864 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160891 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.160995 4720 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161031 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161087 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161116 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161144 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161178 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161206 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161234 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161261 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161290 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161319 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161347 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161376 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161408 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161437 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161492 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161520 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161548 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161573 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161599 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161624 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161650 4720 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161701 4720 reconstruct.go:97] "Volume reconstruction finished" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.161720 4720 reconciler.go:26] "Reconciler: start to sync state" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.175645 4720 manager.go:324] Recovery completed Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.191207 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.194096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.194184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 
06:35:08.194207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.195794 4720 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.195821 4720 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.195849 4720 state_mem.go:36] "Initialized new in-memory state store" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.207134 4720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.208626 4720 policy_none.go:49] "None policy: Start" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.209283 4720 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.209333 4720 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.209368 4720 kubelet.go:2335] "Starting kubelet main sync loop" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.209419 4720 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.210509 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.210584 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.211563 4720 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.211613 4720 state_mem.go:35] "Initializing new in-memory state store" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.234201 4720 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.264951 4720 manager.go:334] "Starting Device Plugin manager" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.265008 4720 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.265024 4720 server.go:79] "Starting device plugin registration server" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.265615 4720 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.265636 4720 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.265802 4720 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.266020 4720 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.266038 4720 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.278157 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 06:35:08 
crc kubenswrapper[4720]: I0122 06:35:08.310448 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.310618 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.312551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.312624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.312646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.312954 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.313321 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.313442 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315450 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315767 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.315842 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.316647 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.317136 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.317275 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.317353 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.317982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318634 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: 
I0122 06:35:08.318801 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.318867 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.319681 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.319727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.319739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.319957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.319989 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320899 4720 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.320948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.336450 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364833 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364875 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364941 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.364977 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365038 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365196 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365315 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365347 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365382 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365410 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365454 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.365741 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.366826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.366863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.366873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.366901 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.367732 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467420 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: 
I0122 06:35:08.467739 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467813 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467692 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467889 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467952 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.467792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468023 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468056 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468166 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 
crc kubenswrapper[4720]: I0122 06:35:08.468199 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468246 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468262 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468330 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468295 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468446 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc 
kubenswrapper[4720]: I0122 06:35:08.468547 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468547 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468582 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468508 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468520 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.468730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.568294 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.571945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.572002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.572024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.572064 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.572811 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.644180 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.661999 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.682626 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.687642 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-738055f2116cf2c5fbf260eda2a5e7b80aa3f78f319962c098e6687f38b22b46 WatchSource:0}: Error finding container 738055f2116cf2c5fbf260eda2a5e7b80aa3f78f319962c098e6687f38b22b46: Status 404 returned error can't find the container with id 738055f2116cf2c5fbf260eda2a5e7b80aa3f78f319962c098e6687f38b22b46 Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.697790 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0795ab79f1cf58d442dbf4346d079408818ad2ea6e480a4b8a92961cdcd0290e WatchSource:0}: Error finding container 0795ab79f1cf58d442dbf4346d079408818ad2ea6e480a4b8a92961cdcd0290e: Status 404 returned error can't find the container with id 0795ab79f1cf58d442dbf4346d079408818ad2ea6e480a4b8a92961cdcd0290e Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.702400 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.710268 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-085955f4cb4ec2909afb09c2eaf587b58854ea638799efccae6f9bdac5c061cb WatchSource:0}: Error finding container 085955f4cb4ec2909afb09c2eaf587b58854ea638799efccae6f9bdac5c061cb: Status 404 returned error can't find the container with id 085955f4cb4ec2909afb09c2eaf587b58854ea638799efccae6f9bdac5c061cb Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.713211 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.732124 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c33c981321f983bdca0108e32ec289731dae6ebe1ceef57d0145822727a67aa1 WatchSource:0}: Error finding container c33c981321f983bdca0108e32ec289731dae6ebe1ceef57d0145822727a67aa1: Status 404 returned error can't find the container with id c33c981321f983bdca0108e32ec289731dae6ebe1ceef57d0145822727a67aa1 Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.738233 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Jan 22 06:35:08 crc kubenswrapper[4720]: W0122 06:35:08.747409 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2a744f8282bc1892643d7d5940dd499ff49f0b810c9ad01eb9b196c7af46b3a3 
WatchSource:0}: Error finding container 2a744f8282bc1892643d7d5940dd499ff49f0b810c9ad01eb9b196c7af46b3a3: Status 404 returned error can't find the container with id 2a744f8282bc1892643d7d5940dd499ff49f0b810c9ad01eb9b196c7af46b3a3 Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.973103 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.975687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.975732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.975743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:08 crc kubenswrapper[4720]: I0122 06:35:08.975770 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:08 crc kubenswrapper[4720]: E0122 06:35:08.976188 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.123895 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.134542 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:24:59.831376677 +0000 UTC Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.216422 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.216621 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c33c981321f983bdca0108e32ec289731dae6ebe1ceef57d0145822727a67aa1"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.218136 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f" exitCode=0 Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.218208 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.218275 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"085955f4cb4ec2909afb09c2eaf587b58854ea638799efccae6f9bdac5c061cb"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.218414 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.219678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.219708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 
06:35:09.219718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.220216 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d" exitCode=0 Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.220349 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.220376 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0795ab79f1cf58d442dbf4346d079408818ad2ea6e480a4b8a92961cdcd0290e"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.220533 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.221626 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222832 4720 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc" exitCode=0 Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222878 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222895 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"738055f2116cf2c5fbf260eda2a5e7b80aa3f78f319962c098e6687f38b22b46"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.222957 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.223682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 
06:35:09.226697 4720 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd" exitCode=0 Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.226764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.226800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a744f8282bc1892643d7d5940dd499ff49f0b810c9ad01eb9b196c7af46b3a3"} Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.226930 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.228270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.228340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.228376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: W0122 06:35:09.380173 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.380284 4720 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:09 crc kubenswrapper[4720]: W0122 06:35:09.417888 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.417992 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:09 crc kubenswrapper[4720]: W0122 06:35:09.467614 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.467706 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.539635 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Jan 22 06:35:09 crc kubenswrapper[4720]: W0122 06:35:09.666835 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.667008 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.776297 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.778557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.778605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.778624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:09 crc kubenswrapper[4720]: I0122 06:35:09.778657 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:09 crc kubenswrapper[4720]: E0122 06:35:09.779286 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" 
node="crc" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.053226 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 06:35:10 crc kubenswrapper[4720]: E0122 06:35:10.054291 4720 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.124680 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.135006 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:30:53.065869583 +0000 UTC Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.232596 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8a954389852d9be1f01cad5b53c0ee3a1e22d956897c2fec4bbeffdf558ec585"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.232731 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.233564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.233595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.233606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.237100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.237141 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.237152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.237257 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.238243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.238273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.238284 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239249 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239261 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239270 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.239938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.242413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.242439 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.242451 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.242462 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244060 4720 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6" exitCode=0 Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244087 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6"} Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244176 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.244769 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.377533 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.392644 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:10 crc kubenswrapper[4720]: I0122 06:35:10.582791 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.135120 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:30:00.071002669 +0000 UTC Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.252229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e"} Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.252483 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.255154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.255232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.255253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.255994 4720 generic.go:334] 
"Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16" exitCode=0 Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256049 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16"} Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256103 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256147 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256229 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256489 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.256956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.257759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.257809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:11 crc kubenswrapper[4720]: 
I0122 06:35:11.257831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.258080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.258099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.258110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.379564 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.381067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.381125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.381142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:11 crc kubenswrapper[4720]: I0122 06:35:11.381174 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.103494 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.135405 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:11:09.45472591 +0000 UTC Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.178530 4720 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263189 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263233 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263269 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56"} Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10"} Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55"} Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263411 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6"} Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.263437 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 
06:35:12.264550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.264655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:12 crc kubenswrapper[4720]: I0122 06:35:12.904563 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.135682 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:59:39.294477069 +0000 UTC Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 
06:35:13.271832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d"} Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.271954 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.272078 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.272856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.272889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.272897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.273606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.273675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:13 crc kubenswrapper[4720]: I0122 06:35:13.273700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.136655 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 16:01:16.679361851 +0000 UTC Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.274404 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.274404 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.275535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.275581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.275598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.276183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.276227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.276241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:14 crc kubenswrapper[4720]: I0122 06:35:14.276640 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 22 06:35:15 crc kubenswrapper[4720]: I0122 06:35:15.137779 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:44:20.598430455 +0000 UTC Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.138823 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:03:15.241484823 +0000 UTC Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.814047 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-etcd/etcd-crc" Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.814325 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.816278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.816361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:16 crc kubenswrapper[4720]: I0122 06:35:16.816387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.139671 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:06:50.638009145 +0000 UTC Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.179082 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.179378 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.181056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.181120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.181149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.437692 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-crc" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.438000 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.439505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.439564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:17 crc kubenswrapper[4720]: I0122 06:35:17.439585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:18 crc kubenswrapper[4720]: I0122 06:35:18.139816 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:25:34.735443294 +0000 UTC Jan 22 06:35:18 crc kubenswrapper[4720]: E0122 06:35:18.278320 4720 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 22 06:35:19 crc kubenswrapper[4720]: I0122 06:35:19.140763 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:44:57.090837903 +0000 UTC Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.028009 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.028123 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.047098 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.047466 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.049230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.049303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.049331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.054272 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.141555 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 22:47:17.295231839 +0000 UTC Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.291988 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.293474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.293526 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:20 crc kubenswrapper[4720]: I0122 06:35:20.293543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.125486 4720 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 22 06:35:21 crc kubenswrapper[4720]: E0122 06:35:21.140601 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.141745 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:30:31.420394279 +0000 UTC Jan 22 06:35:21 crc kubenswrapper[4720]: W0122 06:35:21.285540 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.285669 4720 trace.go:236] Trace[914270232]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 06:35:11.284) (total time: 10001ms): Jan 22 06:35:21 crc kubenswrapper[4720]: Trace[914270232]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (06:35:21.285) Jan 22 06:35:21 crc kubenswrapper[4720]: Trace[914270232]: [10.001406598s] [10.001406598s] END Jan 22 
06:35:21 crc kubenswrapper[4720]: E0122 06:35:21.285703 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 06:35:21 crc kubenswrapper[4720]: E0122 06:35:21.382622 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 22 06:35:21 crc kubenswrapper[4720]: W0122 06:35:21.398825 4720 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.399031 4720 trace.go:236] Trace[759343783]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 06:35:11.398) (total time: 10000ms): Jan 22 06:35:21 crc kubenswrapper[4720]: Trace[759343783]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (06:35:21.398) Jan 22 06:35:21 crc kubenswrapper[4720]: Trace[759343783]: [10.000957925s] [10.000957925s] END Jan 22 06:35:21 crc kubenswrapper[4720]: E0122 06:35:21.399078 4720 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.634065 4720 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.634157 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.649177 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 22 06:35:21 crc kubenswrapper[4720]: I0122 06:35:21.649284 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 22 06:35:22 crc kubenswrapper[4720]: I0122 06:35:22.142760 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:32:11.763196083 +0000 UTC Jan 22 06:35:22 crc kubenswrapper[4720]: I0122 06:35:22.913706 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 22 06:35:22 
crc kubenswrapper[4720]: [+]log ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]etcd ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/generic-apiserver-start-informers ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/priority-and-fairness-filter ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-apiextensions-informers ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-apiextensions-controllers ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/crd-informer-synced ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-system-namespaces-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 22 06:35:22 crc kubenswrapper[4720]: 
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/bootstrap-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/start-kube-aggregator-informers ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-registration-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-discovery-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]autoregister-completion ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-openapi-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 22 06:35:22 crc kubenswrapper[4720]: livez check failed Jan 22 06:35:22 crc kubenswrapper[4720]: I0122 06:35:22.913807 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:35:23 crc kubenswrapper[4720]: I0122 06:35:23.046930 4720 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 06:35:23 crc kubenswrapper[4720]: I0122 06:35:23.047005 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 06:35:23 crc kubenswrapper[4720]: I0122 06:35:23.143456 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:48:01.47504968 +0000 UTC Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.144496 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:39:31.562773593 +0000 UTC Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.536717 4720 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.583263 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.585137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.585215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:24 crc kubenswrapper[4720]: I0122 06:35:24.585246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:24 crc 
kubenswrapper[4720]: I0122 06:35:24.585297 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:24 crc kubenswrapper[4720]: E0122 06:35:24.592068 4720 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.126360 4720 apiserver.go:52] "Watching apiserver" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.131073 4720 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.131671 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.132408 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.132418 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.132896 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.132980 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.133028 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:25 crc kubenswrapper[4720]: E0122 06:35:25.133041 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:25 crc kubenswrapper[4720]: E0122 06:35:25.133066 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.133117 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:25 crc kubenswrapper[4720]: E0122 06:35:25.133164 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.135104 4720 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.135220 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.135707 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.136015 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.136575 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.137290 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.137416 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.137953 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.138048 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.138075 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 22 06:35:25 crc kubenswrapper[4720]: 
I0122 06:35:25.145312 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:35:04.873184268 +0000 UTC Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.182824 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.202952 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.221732 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.234461 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.246327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.261724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:25 crc kubenswrapper[4720]: I0122 06:35:25.275826 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.146143 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:44:58.637434802 +0000 UTC Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.210804 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.210954 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.211112 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.211347 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.640292 4720 trace.go:236] Trace[26234004]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 06:35:12.344) (total time: 14295ms):
Jan 22 06:35:26 crc kubenswrapper[4720]: Trace[26234004]: ---"Objects listed" error: 14295ms (06:35:26.640)
Jan 22 06:35:26 crc kubenswrapper[4720]: Trace[26234004]: [14.295849072s] [14.295849072s] END
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.640353 4720 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.640367 4720 trace.go:236] Trace[1517519209]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Jan-2026 06:35:11.803) (total time: 14836ms):
Jan 22 06:35:26 crc kubenswrapper[4720]: Trace[1517519209]: ---"Objects listed" error: 14836ms (06:35:26.640)
Jan 22 06:35:26 crc kubenswrapper[4720]: Trace[1517519209]: [14.836388509s] [14.836388509s] END
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.640418 4720 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.644039 4720 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.684473 4720 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.699296 4720 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51866->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.699419 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51866->192.168.126.11:17697: read: connection reset by peer"
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745252 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745361 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745416 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745525 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.745572 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746451 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746558 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746604 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746765 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746836 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.746884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747200 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747248 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747258 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747285 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747319 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747381 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747489 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747650 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747702 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747761 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747807 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747826 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747861 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747960 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.747996 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748090 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748156 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748206 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748187 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748345 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748405 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748423 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748454 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748462 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748572 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748587 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748584 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748629 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748678 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748681 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748701 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748735 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748785 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748786 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748880 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748936 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748939 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748965 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748976 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.748993 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749141 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749158 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749152 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749179 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749213 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749239 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749249 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749298 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749332 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749360 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749362 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749390 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749420 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749412 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749519 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749549 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749627 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749644 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749725 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749761 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749799 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749833 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749864 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749900 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749959 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749972 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.749999 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750039 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750085 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b").
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750107 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750143 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750177 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750212 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750242 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750313 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750348 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750383 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750421 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750465 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") 
" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750532 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750565 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750597 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750629 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750661 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750695 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750727 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750792 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750828 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 06:35:26 crc 
kubenswrapper[4720]: I0122 06:35:26.750860 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750893 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750954 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750988 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751081 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751119 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751159 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751198 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751237 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751272 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751322 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751370 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751424 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751479 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751518 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751554 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751655 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751688 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751754 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751788 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" 
(UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751854 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751889 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752432 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752517 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752564 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752604 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752637 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752763 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 
22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752798 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752834 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752875 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752940 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752975 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753010 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753077 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753120 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753293 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753362 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 
06:35:26.753395 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750273 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750307 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750351 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.750576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751222 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.751893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752240 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752474 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752502 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752831 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.752845 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753069 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753725 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753200 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753558 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753591 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753984 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.754225 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.754346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.754513 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.754813 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755071 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755454 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755565 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755838 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.755868 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756007 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756034 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756045 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756314 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756678 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756791 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.757053 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.756717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.757199 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.757333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758107 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758165 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758508 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758557 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758754 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.758985 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.759028 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.759112 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.759421 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.759707 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.759310 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.760344 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.760629 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.760769 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.761412 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.761483 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.761597 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.761524 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762104 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762186 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762201 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762461 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762622 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.753624 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.762881 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763048 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763191 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763247 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763303 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763776 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763850 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763948 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764078 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764129 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764177 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764528 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768684 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768753 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768898 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768952 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768988 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772766 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772836 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772877 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772976 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773023 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773055 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773098 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773135 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773181 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773222 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773309 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773341 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773371 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773402 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\"
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773627 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773699 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774752 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763244 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.775551 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.763752 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764140 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.764595 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.776150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.776105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.765017 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.765083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.766143 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.766858 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.767095 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.776247 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.767802 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.767828 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.767871 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.768932 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.769123 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.770424 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.770683 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.770787 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.770822 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.771131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.771841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772350 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772407 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772438 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.772491 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773812 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.773947 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774132 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774152 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774314 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774518 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774734 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774772 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774546 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.774843 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.775035 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.775105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.775260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.775852 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.776103 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.765001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.777175 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:27.277096039 +0000 UTC m=+19.419002924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.777536 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.777643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.777670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.777952 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.777960 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.778289 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.779188 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.779216 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.779260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.779309 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.780180 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.780601 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.780627 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781118 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781171 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781233 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781291 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781313 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781359 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781850 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782366 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782387 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783438 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783473 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783496 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783518 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 22 06:35:26 crc kubenswrapper[4720]: 
I0122 06:35:26.783545 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783571 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783770 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783793 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783834 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781445 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: 
"v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.784113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781544 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781579 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.781639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782236 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782496 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782631 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782728 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782743 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.782788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783789 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783897 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785036 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785299 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.785464 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.786336 4720 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787247 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787301 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787365 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787402 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787445 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787530 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787770 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.787576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788010 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788062 4720 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788093 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.783869 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.784078 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.784114 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.784791 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.784496 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788115 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788204 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.788616 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.788729 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:27.288698113 +0000 UTC m=+19.430604858 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.788958 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.789149 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789300 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789309 4720 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.789314 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:27.289289099 +0000 UTC m=+19.431195874 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789445 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789471 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789494 4720 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789537 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789555 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789569 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: 
I0122 06:35:26.789610 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789626 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.789641 4720 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.790981 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791328 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791597 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791624 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791800 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791817 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791878 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791894 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791942 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791983 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.791999 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792015 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792040 4720 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792063 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792081 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792100 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792121 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792137 4720 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792153 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792170 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792187 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792209 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792227 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792244 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792282 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792960 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.792998 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793011 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793025 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793039 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793050 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793060 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793073 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793083 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793094 4720 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793105 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793115 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793127 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793142 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793152 4720 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793163 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793175 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793185 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793196 4720 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793208 4720 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793218 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793229 4720 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793239 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793248 4720 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793258 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793270 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793279 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793289 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793299 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793310 4720 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793320 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793330 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793340 4720 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793351 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793361 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793371 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793381 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793398 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793412 4720 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793425 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793438 4720 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793457 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793467 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793483 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793493 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793505 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793515 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793525 4720 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793536 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793545 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793554 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793575 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793586 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793596 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793613 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793623 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793633 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793642 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793653 4720 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793662 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793673 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793683 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793692 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793701 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793710 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793719 4720 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793730 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793740 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793752 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793761 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793771 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793782 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793792 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793801 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793811 4720 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793820 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793830 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793839 4720 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793848 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793859 4720 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793869 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793878 4720 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793888 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793916 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793927 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793936 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793947 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793956 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793965 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.793974 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794043 4720 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794059 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794074 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794110 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794124 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794137 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794150 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794163 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794176 4720 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794189 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794203 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794220 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794238 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794257 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794276 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794289 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794302 4720 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794314 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794332 4720 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794350 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794369 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794404 4720 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794422 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794439 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794457 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794471 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794484 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794497 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794511 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794523 4720 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:35:26 crc kubenswrapper[4720]: I0122
06:35:26.794540 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794592 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794605 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794618 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794632 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794645 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794659 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794673 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794685 4720 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794697 4720 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794727 4720 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.794739 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.795841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.802633 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.802658 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.802680 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.802765 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:27.302739654 +0000 UTC m=+19.444646379 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.805940 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.805958 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.805971 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.806012 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:27.306001005 +0000 UTC m=+19.447907730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.814208 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.814269 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.815152 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.815251 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.815313 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.815447 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.815741 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.816746 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.816905 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.817449 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.818410 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.819668 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.827079 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.830877 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.831684 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.831947 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.832071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.839565 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.843151 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.863834 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897336 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897415 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897457 4720 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897469 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897478 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897491 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897501 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897517 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897526 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897535 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897544 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897556 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897564 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897573 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897605 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897614 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897622 4720 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897630 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897638 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897646 4720 reconciler_common.go:293] "Volume detached 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897654 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897663 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897671 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897679 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897689 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897698 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897835 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.897876 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.956545 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.966594 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.977814 4720 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 06:35:26 crc kubenswrapper[4720]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Jan 22 06:35:26 crc kubenswrapper[4720]: set -o allexport Jan 22 06:35:26 crc kubenswrapper[4720]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Jan 22 06:35:26 crc kubenswrapper[4720]: source /etc/kubernetes/apiserver-url.env Jan 22 06:35:26 crc kubenswrapper[4720]: else Jan 22 06:35:26 crc kubenswrapper[4720]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Jan 22 06:35:26 crc kubenswrapper[4720]: exit 1 Jan 22 06:35:26 crc kubenswrapper[4720]: fi Jan 22 06:35:26 crc kubenswrapper[4720]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Jan 22 06:35:26 crc 
kubenswrapper[4720]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Valu
e:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 06:35:26 crc kubenswrapper[4720]: > logger="UnhandledError" Jan 22 06:35:26 crc kubenswrapper[4720]: W0122 06:35:26.978546 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1cc78e73b7301bf7de2dda85f777464f2a700de36bd3a5dfc1348eb929fa98c4 WatchSource:0}: Error finding container 1cc78e73b7301bf7de2dda85f777464f2a700de36bd3a5dfc1348eb929fa98c4: Status 404 returned error can't find the container with id 1cc78e73b7301bf7de2dda85f777464f2a700de36bd3a5dfc1348eb929fa98c4 Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.978882 
4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Jan 22 06:35:26 crc kubenswrapper[4720]: I0122 06:35:26.980120 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.980869 4720 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 06:35:26 crc kubenswrapper[4720]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 22 06:35:26 crc kubenswrapper[4720]: if [[ -f "/env/_master" ]]; then Jan 22 06:35:26 crc kubenswrapper[4720]: set -o allexport Jan 22 06:35:26 crc kubenswrapper[4720]: source "/env/_master" Jan 22 06:35:26 crc kubenswrapper[4720]: set +o allexport Jan 22 06:35:26 crc kubenswrapper[4720]: fi Jan 22 06:35:26 crc kubenswrapper[4720]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Jan 22 06:35:26 crc kubenswrapper[4720]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Jan 22 06:35:26 crc kubenswrapper[4720]: ho_enable="--enable-hybrid-overlay" Jan 22 06:35:26 crc kubenswrapper[4720]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Jan 22 06:35:26 crc kubenswrapper[4720]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Jan 22 06:35:26 crc kubenswrapper[4720]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Jan 22 06:35:26 crc kubenswrapper[4720]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 06:35:26 crc kubenswrapper[4720]: --webhook-cert-dir="/etc/webhook-cert" \ Jan 22 06:35:26 crc kubenswrapper[4720]: --webhook-host=127.0.0.1 \ Jan 22 06:35:26 crc kubenswrapper[4720]: --webhook-port=9743 \ Jan 22 06:35:26 crc kubenswrapper[4720]: ${ho_enable} \ Jan 22 06:35:26 crc kubenswrapper[4720]: --enable-interconnect \ Jan 22 06:35:26 crc kubenswrapper[4720]: --disable-approver \ Jan 22 06:35:26 crc kubenswrapper[4720]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Jan 22 06:35:26 crc kubenswrapper[4720]: --wait-for-kubernetes-api=200s \ Jan 22 06:35:26 crc kubenswrapper[4720]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Jan 22 06:35:26 crc kubenswrapper[4720]: --loglevel="${LOGLEVEL}" Jan 22 06:35:26 crc kubenswrapper[4720]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 06:35:26 crc kubenswrapper[4720]: > logger="UnhandledError" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.984404 4720 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 22 06:35:26 crc kubenswrapper[4720]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Jan 22 06:35:26 crc 
kubenswrapper[4720]: if [[ -f "/env/_master" ]]; then Jan 22 06:35:26 crc kubenswrapper[4720]: set -o allexport Jan 22 06:35:26 crc kubenswrapper[4720]: source "/env/_master" Jan 22 06:35:26 crc kubenswrapper[4720]: set +o allexport Jan 22 06:35:26 crc kubenswrapper[4720]: fi Jan 22 06:35:26 crc kubenswrapper[4720]: Jan 22 06:35:26 crc kubenswrapper[4720]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Jan 22 06:35:26 crc kubenswrapper[4720]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Jan 22 06:35:26 crc kubenswrapper[4720]: --disable-webhook \ Jan 22 06:35:26 crc kubenswrapper[4720]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Jan 22 06:35:26 crc kubenswrapper[4720]: --loglevel="${LOGLEVEL}" Jan 22 06:35:26 crc kubenswrapper[4720]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Jan 22 06:35:26 crc kubenswrapper[4720]: > logger="UnhandledError" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.985646 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Jan 22 06:35:26 crc kubenswrapper[4720]: W0122 06:35:26.991317 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-60d35ab1273cc15329de98ada69550b9f0a7c70a0a07bdc37acd8ee531739730 WatchSource:0}: Error finding container 60d35ab1273cc15329de98ada69550b9f0a7c70a0a07bdc37acd8ee531739730: Status 404 returned error can't find the container with id 60d35ab1273cc15329de98ada69550b9f0a7c70a0a07bdc37acd8ee531739730 Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.994167 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Jan 22 06:35:26 crc kubenswrapper[4720]: E0122 06:35:26.996117 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.041351 4720 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.146599 4720 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:07:53.727960989 +0000 UTC Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.209880 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.210103 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.300183 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.300516 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:28.300457877 +0000 UTC m=+20.442364622 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.300614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.300723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.300992 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.301078 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:28.301062024 +0000 UTC m=+20.442968769 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.301088 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.301295 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:28.30125872 +0000 UTC m=+20.443165415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.318707 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1cc78e73b7301bf7de2dda85f777464f2a700de36bd3a5dfc1348eb929fa98c4"} Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.320157 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8f0e919501e346f2ffcd880a1f2c2a146f77e9935557a8042373416fe3fdc92"} Jan 22 
06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.323762 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.327430 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e" exitCode=255 Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.327532 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e"} Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.328943 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"60d35ab1273cc15329de98ada69550b9f0a7c70a0a07bdc37acd8ee531739730"} Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.346259 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.354947 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.355315 4720 scope.go:117] "RemoveContainer" 
containerID="9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.365827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.379547 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.397204 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.401599 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.401672 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402012 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402044 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402059 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402121 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:28.402097912 +0000 UTC m=+20.544004617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402849 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402882 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.402963 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:27 crc kubenswrapper[4720]: E0122 06:35:27.403095 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:28.40307087 +0000 UTC m=+20.544977585 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.408772 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.422257 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.440151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information 
is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube
-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.459642 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.476183 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.476344 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.495051 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.495353 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.496929 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.507448 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.517164 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.529203 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.546686 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.562026 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.574627 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.587129 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.605804 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.619430 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.630308 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.640974 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.911487 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.930193 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:27Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.942367 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:27Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.962465 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:27Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.979652 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:27Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:27 crc kubenswrapper[4720]: I0122 06:35:27.992335 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:27Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.012388 4720 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.026548 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.043659 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.147436 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:15:40.345967279 +0000 UTC Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.210518 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.210682 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.210854 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.211117 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.217028 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.217645 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.218699 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.219338 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.220313 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" 
path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.220810 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.221374 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.222220 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.222788 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.223642 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.224110 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.225065 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.225547 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.226027 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.227088 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.227709 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.229068 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.229556 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.230461 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.232167 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.232764 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.233765 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.234203 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.235240 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.235630 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.236276 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.237413 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.238102 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.239159 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.239636 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.240555 4720 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.240675 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.242586 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.243746 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.244414 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.245966 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.246707 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.247722 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.248357 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.249440 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.249870 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.250869 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.251978 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.252969 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.253416 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.254405 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.254928 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.255689 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.256119 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.256626 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.257392 4720 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.257836 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.259201 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.260148 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.260688 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.282015 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.309245 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.309271 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:30.309240485 +0000 UTC m=+22.451147200 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.309490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.309656 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.309713 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.309766 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:30.309741689 +0000 UTC m=+22.451648404 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.309996 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.310114 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:30.310086299 +0000 UTC m=+22.451993084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.312094 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.325357 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.334019 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 
06:35:28.336140 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5"} Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.336724 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.338575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1"} Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.338628 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122"} Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.344633 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.346058 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.346520 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1"} Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.356666 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.361154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.372554 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.390090 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.401971 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.410826 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.410949 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411116 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411157 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411172 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411237 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:30.41121624 +0000 UTC m=+22.553122945 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411243 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411266 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411284 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:28 crc kubenswrapper[4720]: E0122 06:35:28.411333 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:30.411319063 +0000 UTC m=+22.553225768 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.414069 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.424732 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.435620 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.455949 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.478028 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.512754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:28 crc kubenswrapper[4720]: I0122 06:35:28.536591 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:29 crc kubenswrapper[4720]: I0122 06:35:29.148161 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:36:55.687736056 +0000 UTC Jan 22 06:35:29 crc kubenswrapper[4720]: I0122 06:35:29.210058 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:29 crc kubenswrapper[4720]: E0122 06:35:29.210277 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.055427 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.062660 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.070798 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.080889 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.108854 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.127707 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.148351 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.148623 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:19:06.180656838 +0000 UTC Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.169133 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.203192 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.209722 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.209893 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.209991 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.210260 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.229521 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.251341 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.284992 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.307719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.326458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.332520 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.332704 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:34.332670794 +0000 UTC m=+26.474577529 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.332763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.332851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.332897 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.333038 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.333079 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:34.333053615 +0000 UTC m=+26.474960360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.333164 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:34.333101726 +0000 UTC m=+26.475008471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.349607 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.361695 4720 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.370047 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.405707 4720 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3e
d55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.434664 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.434824 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.434862 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.435154 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:34.435116992 +0000 UTC m=+26.577023747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.434199 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.435538 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.436437 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.436629 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.436880 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:30 crc kubenswrapper[4720]: E0122 06:35:30.437273 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:34.437246031 +0000 UTC m=+26.579152776 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.462834 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.485327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.500632 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:30Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.992256 4720 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.994329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.994374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.994386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:30 crc kubenswrapper[4720]: I0122 06:35:30.994467 4720 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.004238 4720 kubelet_node_status.go:115] "Node was 
previously registered" node="crc" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.004609 4720 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.006196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.006255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.006274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.006301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.006363 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.037326 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.042157 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.042216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.042235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.042264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.042284 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.064111 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.069307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.069374 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.069392 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.069419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.069438 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.097070 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.103391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.103464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.103483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.103510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.103529 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.122134 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.125725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.125795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.125820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.125852 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.125876 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.141528 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.141740 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.143409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.143440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.143451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.143471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.143486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.148746 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:37:23.580747253 +0000 UTC Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.210079 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:31 crc kubenswrapper[4720]: E0122 06:35:31.210422 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.245669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.245715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.245726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.245744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.245756 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.348555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.348631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.348647 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.348666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.348679 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.359100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.394720 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:1
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.422527 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.446796 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.451763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.451828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.451851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.451880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.451900 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.467977 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.490069 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.510184 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.527995 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.542950 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.555597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.555772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.555884 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.555994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.556068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.563830 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:31Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.659127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.659203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.659222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.659253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.659273 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.763035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.763101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.763119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.763146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.763167 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.868274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.868356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.868383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.868415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.868440 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.972589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.972662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.972680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.972711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:31 crc kubenswrapper[4720]: I0122 06:35:31.972731 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:31Z","lastTransitionTime":"2026-01-22T06:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.077334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.077409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.077431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.077466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.077488 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.149443 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:02:53.898681365 +0000 UTC Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.180714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.180779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.180803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.180839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.180864 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.209989 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.210166 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:32 crc kubenswrapper[4720]: E0122 06:35:32.210362 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:32 crc kubenswrapper[4720]: E0122 06:35:32.210659 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.284719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.284809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.284839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.284880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.284945 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.389476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.389560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.389586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.389630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.389659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.492471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.492509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.492518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.492534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.492546 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.552384 4720 csr.go:261] certificate signing request csr-n4p5k is approved, waiting to be issued Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.580090 4720 csr.go:257] certificate signing request csr-n4p5k is issued Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.594656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.594689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.594700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.594716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.594727 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.697150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.697198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.697208 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.697225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.697235 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.799251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.799294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.799303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.799321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.799332 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.901279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.901324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.901336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.901356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.901366 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:32Z","lastTransitionTime":"2026-01-22T06:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.917660 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dtnxt"] Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.918046 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.918814 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-n5w5r"] Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.919128 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-n5w5r" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.920782 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.921449 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.921582 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.922323 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.922416 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.924164 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.924170 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.924537 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.947224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.971419 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.985721 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:32 crc kubenswrapper[4720]: I0122 06:35:32.999854 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.008009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.008053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.008064 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.008081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.008092 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.015648 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.029508 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.052381 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062390 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/518eedd0-2cb6-458d-a7a8-d8c8b8296401-hosts-file\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-conf-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062444 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-daemon-config\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-cni-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062534 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-cni-binary-copy\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062595 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-k8s-cni-cncf-io\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062615 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-hostroot\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062657 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-socket-dir-parent\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062672 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlzmz\" (UniqueName: 
\"kubernetes.io/projected/85373343-156d-4de0-a72b-baaf7c4e3d08-kube-api-access-tlzmz\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-cnibin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-netns\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062751 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx2p\" (UniqueName: \"kubernetes.io/projected/518eedd0-2cb6-458d-a7a8-d8c8b8296401-kube-api-access-wqx2p\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062765 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-system-cni-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-multus\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062881 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-bin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-etc-kubernetes\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062944 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-os-release\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062961 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-kubelet\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.062974 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-multus-certs\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.064877 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.079650 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.092140 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.110488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.110696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.110772 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.110845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.110968 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.113761 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.128303 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.143980 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.150281 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:38:05.525365959 +0000 UTC Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.159836 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164462 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-cnibin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164576 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-cnibin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-socket-dir-parent\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164674 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlzmz\" (UniqueName: \"kubernetes.io/projected/85373343-156d-4de0-a72b-baaf7c4e3d08-kube-api-access-tlzmz\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164802 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx2p\" (UniqueName: \"kubernetes.io/projected/518eedd0-2cb6-458d-a7a8-d8c8b8296401-kube-api-access-wqx2p\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164746 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-socket-dir-parent\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.164886 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-system-cni-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165153 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-system-cni-dir\") pod \"multus-n5w5r\" (UID: 
\"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165181 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-netns\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165242 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-netns\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165293 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-multus\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165352 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-bin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-etc-kubernetes\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165395 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-multus\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-cni-bin\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-os-release\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165477 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-kubelet\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165484 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-etc-kubernetes\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165503 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-multus-certs\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165539 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-conf-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-daemon-config\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-multus-certs\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165597 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/518eedd0-2cb6-458d-a7a8-d8c8b8296401-hosts-file\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165640 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/518eedd0-2cb6-458d-a7a8-d8c8b8296401-hosts-file\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " 
pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165671 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-cni-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-conf-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165698 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-cni-binary-copy\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-k8s-cni-cncf-io\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165750 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-hostroot\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hostroot\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-hostroot\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165842 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-var-lib-kubelet\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-cni-dir\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.165565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-os-release\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.166636 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-multus-daemon-config\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.166747 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/85373343-156d-4de0-a72b-baaf7c4e3d08-cni-binary-copy\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " 
pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.166044 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/85373343-156d-4de0-a72b-baaf7c4e3d08-host-run-k8s-cni-cncf-io\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.176123 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.191646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx2p\" (UniqueName: \"kubernetes.io/projected/518eedd0-2cb6-458d-a7a8-d8c8b8296401-kube-api-access-wqx2p\") pod \"node-resolver-dtnxt\" (UID: \"518eedd0-2cb6-458d-a7a8-d8c8b8296401\") " pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.198040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlzmz\" (UniqueName: \"kubernetes.io/projected/85373343-156d-4de0-a72b-baaf7c4e3d08-kube-api-access-tlzmz\") pod \"multus-n5w5r\" (UID: \"85373343-156d-4de0-a72b-baaf7c4e3d08\") " pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.203523 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.209861 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:33 crc kubenswrapper[4720]: E0122 06:35:33.210030 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.213743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.213786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.213800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.213819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.213832 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.216488 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.230625 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.232758 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dtnxt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.245692 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-n5w5r" Jan 22 06:35:33 crc kubenswrapper[4720]: W0122 06:35:33.248723 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod518eedd0_2cb6_458d_a7a8_d8c8b8296401.slice/crio-ea243c72075c340797b220ab18717ba1bfaba064e27462bf5d74a31247a0bddc WatchSource:0}: Error finding container ea243c72075c340797b220ab18717ba1bfaba064e27462bf5d74a31247a0bddc: Status 404 returned error can't find the container with id ea243c72075c340797b220ab18717ba1bfaba064e27462bf5d74a31247a0bddc Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.260569 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: W0122 06:35:33.265264 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85373343_156d_4de0_a72b_baaf7c4e3d08.slice/crio-6c8c1a09110c893d2c168d96e4d9d78a6af46425be267694c04c46e06c655cce WatchSource:0}: Error finding container 6c8c1a09110c893d2c168d96e4d9d78a6af46425be267694c04c46e06c655cce: Status 404 
returned error can't find the container with id 6c8c1a09110c893d2c168d96e4d9d78a6af46425be267694c04c46e06c655cce Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.316210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.316240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.316248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.316263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.316272 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.322747 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc2f4"] Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.323541 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.325287 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.326236 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.326425 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.327238 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.327410 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.327540 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.327773 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.327892 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.329229 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-lxzml"] Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.329813 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.330461 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bnsvd"] Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.331158 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.331638 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.331699 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.333129 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.333322 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.335725 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.335747 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.338457 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.353957 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.374244 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtnxt" event={"ID":"518eedd0-2cb6-458d-a7a8-d8c8b8296401","Type":"ContainerStarted","Data":"ea243c72075c340797b220ab18717ba1bfaba064e27462bf5d74a31247a0bddc"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.389061 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" 
event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerStarted","Data":"6c8c1a09110c893d2c168d96e4d9d78a6af46425be267694c04c46e06c655cce"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.409543 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.429182 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.437240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.437307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.437321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.437343 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.437358 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.464355 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473331 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473396 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473422 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473438 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc 
kubenswrapper[4720]: I0122 06:35:33.473453 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cnibin\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473491 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-system-cni-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473535 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " 
pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473551 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f52f\" (UniqueName: \"kubernetes.io/projected/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-kube-api-access-9f52f\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473572 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h66q\" (UniqueName: \"kubernetes.io/projected/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-kube-api-access-8h66q\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmnn9\" (UniqueName: \"kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473660 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473691 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473709 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473730 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473746 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473764 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-rootfs\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473801 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc 
kubenswrapper[4720]: I0122 06:35:33.473822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473841 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-proxy-tls\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473861 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.473881 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.474206 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 
06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.474228 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-os-release\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.474245 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.481809 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.499066 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.512770 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.529702 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.541843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.541919 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.541933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc 
kubenswrapper[4720]: I0122 06:35:33.541957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.541970 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.546042 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.568622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-os-release\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575049 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575075 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575099 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575122 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575142 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575162 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575182 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cnibin\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575200 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-os-release\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575304 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575314 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-system-cni-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-system-cni-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575371 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575426 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575451 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cnibin\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.575960 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h66q\" (UniqueName: \"kubernetes.io/projected/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-kube-api-access-8h66q\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f52f\" (UniqueName: \"kubernetes.io/projected/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-kube-api-access-9f52f\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.577438 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.578023 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmnn9\" (UniqueName: \"kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576629 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-mcd-auth-proxy-config\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576951 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576307 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.577978 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.576356 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.578789 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579499 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580014 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580662 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580786 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-rootfs\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580855 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-rootfs\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579649 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579458 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579809 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580803 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580842 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-22 06:30:32 +0000 UTC, rotation deadline is 2026-11-23 
18:24:26.671258251 +0000 UTC Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581033 4720 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7331h48m53.09023522s for next certificate rotation Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.580874 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-proxy-tls\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581227 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.579292 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-cni-binary-copy\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581392 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581467 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.581470 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd\") pod \"ovnkube-node-pc2f4\" 
(UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.584565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.584671 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-proxy-tls\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.586776 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.596427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmnn9\" (UniqueName: \"kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9\") pod \"ovnkube-node-pc2f4\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.596992 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h66q\" (UniqueName: \"kubernetes.io/projected/c7b3c34a-9870-4c9f-990b-29b7e768d5a5-kube-api-access-8h66q\") pod \"multus-additional-cni-plugins-lxzml\" (UID: \"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\") " 
pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.602466 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f52f\" (UniqueName: \"kubernetes.io/projected/f4b26e9d-6a95-4b1c-9750-88b6aa100c67-kube-api-access-9f52f\") pod \"machine-config-daemon-bnsvd\" (UID: \"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\") " pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.609242 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.625998 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.637270 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.640815 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.644292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.644409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.644495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.644571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.644656 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: W0122 06:35:33.647699 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a725fa6_120e_41b1_bf7b_e1419e35c891.slice/crio-97f65448ee42888f06a1ee0565e9d5e6a0ccb5044062ad539d5060910ee6b4bd WatchSource:0}: Error finding container 97f65448ee42888f06a1ee0565e9d5e6a0ccb5044062ad539d5060910ee6b4bd: Status 404 returned error can't find the container with id 97f65448ee42888f06a1ee0565e9d5e6a0ccb5044062ad539d5060910ee6b4bd Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.649227 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lxzml" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.657814 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:33 crc kubenswrapper[4720]: W0122 06:35:33.660104 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7b3c34a_9870_4c9f_990b_29b7e768d5a5.slice/crio-7a4a0ae15ae1dd5cdb2b49ad133e283bfe808093a39c83c2385a6ad9795cf688 WatchSource:0}: Error finding container 7a4a0ae15ae1dd5cdb2b49ad133e283bfe808093a39c83c2385a6ad9795cf688: Status 404 returned error can't find the container with id 7a4a0ae15ae1dd5cdb2b49ad133e283bfe808093a39c83c2385a6ad9795cf688 Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.669435 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:35:33 crc kubenswrapper[4720]: W0122 06:35:33.683812 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b26e9d_6a95_4b1c_9750_88b6aa100c67.slice/crio-96ba8f7e7ecd9d6f41b6382f687c3bdc5191a30a0064aeeceffe5b67e336c7b8 WatchSource:0}: Error finding container 96ba8f7e7ecd9d6f41b6382f687c3bdc5191a30a0064aeeceffe5b67e336c7b8: Status 404 returned error can't find the container with id 96ba8f7e7ecd9d6f41b6382f687c3bdc5191a30a0064aeeceffe5b67e336c7b8 Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.748946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.749000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.749011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.749030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.749043 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.851128 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.851207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.851227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.851256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.851275 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.953879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.953934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.953944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.953963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:33 crc kubenswrapper[4720]: I0122 06:35:33.953974 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:33Z","lastTransitionTime":"2026-01-22T06:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.056640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.056690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.056702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.056725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.056737 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.151563 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:58:30.423865033 +0000 UTC Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.160052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.160106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.160122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.160145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.160160 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.210369 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.210509 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.210652 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.210768 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.262377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.262419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.262429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.262444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.262453 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.365647 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.365694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.365714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.365735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.365745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.390721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.390865 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:42.390835668 +0000 UTC m=+34.532742413 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.391066 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.391254 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 
06:35:34.391359 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:42.391333952 +0000 UTC m=+34.533240667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.391266 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.391958 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.392060 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:42.392042322 +0000 UTC m=+34.533949067 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.395898 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerStarted","Data":"e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.398501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dtnxt" event={"ID":"518eedd0-2cb6-458d-a7a8-d8c8b8296401","Type":"ContainerStarted","Data":"794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.400306 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" exitCode=0 Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.400401 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.400459 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"97f65448ee42888f06a1ee0565e9d5e6a0ccb5044062ad539d5060910ee6b4bd"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.402478 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.402530 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.402548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"96ba8f7e7ecd9d6f41b6382f687c3bdc5191a30a0064aeeceffe5b67e336c7b8"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.405353 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8" exitCode=0 Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.405410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.405440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerStarted","Data":"7a4a0ae15ae1dd5cdb2b49ad133e283bfe808093a39c83c2385a6ad9795cf688"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.420726 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.437409 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.448568 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.470817 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-
22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.473220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.473263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.473275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.473295 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.473308 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.486523 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.494140 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.494311 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494633 4720 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494658 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494674 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494727 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:42.494707395 +0000 UTC m=+34.636614200 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494755 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494786 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494802 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:34 crc kubenswrapper[4720]: E0122 06:35:34.494860 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:42.494835689 +0000 UTC m=+34.636742614 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.496842 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.511350 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.524550 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.541458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.556741 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.571639 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.575972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.576015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.576028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.576049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.576061 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.594555 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.610334 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.623650 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.643999 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773
257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.659902 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.673860 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.690558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.690614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.690629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.690653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.690670 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.704203 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.747727 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.762356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.777593 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.789528 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.793433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.793489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.793500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.793521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.793533 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.804579 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.828225 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.846001 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.861569 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.877450 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.889399 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.897648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.898084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.898105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.898132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:34 crc kubenswrapper[4720]: I0122 06:35:34.898149 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:34Z","lastTransitionTime":"2026-01-22T06:35:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.001899 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.002485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.002504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.002543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.002560 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.104732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.104788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.104799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.104818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.104829 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.152198 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:49:54.786964988 +0000 UTC Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.207513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.207563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.207574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.207597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.207610 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.209520 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:35 crc kubenswrapper[4720]: E0122 06:35:35.209654 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.239671 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-5bmrh"] Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.240136 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.242670 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.242896 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.243471 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.255167 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.281692 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.299811 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.304704 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/819a554b-cde8-41eb-bf3c-b965b5754ee9-serviceca\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.304753 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w949z\" (UniqueName: \"kubernetes.io/projected/819a554b-cde8-41eb-bf3c-b965b5754ee9-kube-api-access-w949z\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.304783 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/819a554b-cde8-41eb-bf3c-b965b5754ee9-host\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.310542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.310593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc 
kubenswrapper[4720]: I0122 06:35:35.310605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.310624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.310634 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.329747 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.347397 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.360785 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.388752 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.403753 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.406129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w949z\" (UniqueName: \"kubernetes.io/projected/819a554b-cde8-41eb-bf3c-b965b5754ee9-kube-api-access-w949z\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.406185 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/819a554b-cde8-41eb-bf3c-b965b5754ee9-host\") pod 
\"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.406255 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/819a554b-cde8-41eb-bf3c-b965b5754ee9-serviceca\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.406466 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/819a554b-cde8-41eb-bf3c-b965b5754ee9-host\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.407708 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/819a554b-cde8-41eb-bf3c-b965b5754ee9-serviceca\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416430 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 
06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.416628 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.421702 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154" exitCode=0 Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.421791 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.429354 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430392 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430468 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430490 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430545 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.430559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.434182 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w949z\" (UniqueName: \"kubernetes.io/projected/819a554b-cde8-41eb-bf3c-b965b5754ee9-kube-api-access-w949z\") pod \"node-ca-5bmrh\" (UID: \"819a554b-cde8-41eb-bf3c-b965b5754ee9\") " pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.447199 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.464695 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.479635 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.504304 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.520315 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.521842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.521869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.521881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.521900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.521928 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.538222 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.554166 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.559307 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-5bmrh" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.585965 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.602184 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc3
2fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 
2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.616548 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.631646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.631688 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.631700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.631720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.631737 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.633128 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.649823 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.666239 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.681666 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.703368 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.721281 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734116 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 
06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.734693 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.755403 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.771451 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.794701 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.812091 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:35:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.837098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.837130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.837139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.837158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.837170 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.940372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.940417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.940428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.940451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:35 crc kubenswrapper[4720]: I0122 06:35:35.940464 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:35Z","lastTransitionTime":"2026-01-22T06:35:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.043483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.043547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.043561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.043584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.043603 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.148034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.148098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.148125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.148155 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.148172 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.153080 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:13:47.453078151 +0000 UTC Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.210567 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.210651 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:36 crc kubenswrapper[4720]: E0122 06:35:36.210755 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:36 crc kubenswrapper[4720]: E0122 06:35:36.210838 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.256765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.256833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.256852 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.256877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.256894 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.361603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.361690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.361716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.361748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.361772 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.437587 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3" exitCode=0 Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.437695 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.443267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5bmrh" event={"ID":"819a554b-cde8-41eb-bf3c-b965b5754ee9","Type":"ContainerStarted","Data":"4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.443353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-5bmrh" event={"ID":"819a554b-cde8-41eb-bf3c-b965b5754ee9","Type":"ContainerStarted","Data":"ecc28de01363cb016a5219151dcdbde390b954162154cf9107c854e2cce7c537"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.459007 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.464863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.464942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.464953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.464971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.464984 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.483558 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z 
is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.507618 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01
-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.524164 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.535133 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.551640 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.563285 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.576137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.576268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.576331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc 
kubenswrapper[4720]: I0122 06:35:36.576362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.576384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.578200 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 
06:35:36.594030 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.607192 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.636569 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.656368 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7
610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.668585 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.679011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.679057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.679069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 
06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.679094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.679113 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.691067 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85
aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.710163 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.723463 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.734655 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.756408 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.772974 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7
610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.782584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.782625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.782636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.782656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.782683 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.789234 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.806740 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.827606 4720 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.843820 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.859453 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.885528 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.885596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.885611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.885639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.885660 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.890446 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.911864 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.936419 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.957168 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.976807 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.988142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.988193 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.988207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.988230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.988242 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:36Z","lastTransitionTime":"2026-01-22T06:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:36 crc kubenswrapper[4720]: I0122 06:35:36.995445 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.091793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.092060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.092143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.092209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.092267 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.153474 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:28:50.393727376 +0000 UTC Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.195948 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.196321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.196409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.196506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.196586 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.210381 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:37 crc kubenswrapper[4720]: E0122 06:35:37.210582 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.299279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.299333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.299347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.299367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.299384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.402530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.402585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.402600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.402625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.402641 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.458892 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.461994 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93" exitCode=0 Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.462031 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.493638 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.506974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.507019 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.507032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.507054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.507068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.508014 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"n
ame\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.522143 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.538563 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.554144 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.574356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.596325 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.610052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.610096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.610111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.610133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.610147 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.625497 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.647576 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.664582 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.681327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.697628 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.709926 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.713466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.713514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.713531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.713552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.713569 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.725151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.737065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.817987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.818046 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.818060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc 
kubenswrapper[4720]: I0122 06:35:37.818079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.818093 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.921529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.921992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.922011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.922041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:37 crc kubenswrapper[4720]: I0122 06:35:37.922059 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:37Z","lastTransitionTime":"2026-01-22T06:35:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.015755 4720 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.042339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.042391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.042408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.042429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.042444 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.145259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.145301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.145313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.145332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.145344 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.154426 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:17:55.367257386 +0000 UTC Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.210644 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:38 crc kubenswrapper[4720]: E0122 06:35:38.210799 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.210864 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:38 crc kubenswrapper[4720]: E0122 06:35:38.211168 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.229628 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248048 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.248580 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.266063 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.283327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.306494 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.322498 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.345480 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.360192 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.360278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.360297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.360328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.360349 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.368045 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.386438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.409450 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.466791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.466849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.466861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.466882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.466893 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.473562 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422" exitCode=0 Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.473612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.475536 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.504046 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702
f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178
f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.519285 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.536891 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.553641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.569507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.569539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.569548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.569565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.569575 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.576556 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.598366 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.614949 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.630151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.647794 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.662034 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z"
Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.671905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.671961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.671973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.671989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.672001 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.677801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.693353 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.706533 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.722639 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.743656 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.764946 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.775596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.775642 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.775652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc 
kubenswrapper[4720]: I0122 06:35:38.775673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.775685 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.782840 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.812992 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.833142 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.878689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.878723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.878734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.878770 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.878783 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.981368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.981434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.981453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.981480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:38 crc kubenswrapper[4720]: I0122 06:35:38.981498 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:38Z","lastTransitionTime":"2026-01-22T06:35:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.084689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.084765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.084789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.084877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.084935 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.154555 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 03:49:28.924710903 +0000 UTC Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.188524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.188571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.188588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.188612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.188630 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.209616 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:39 crc kubenswrapper[4720]: E0122 06:35:39.209790 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.298120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.298243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.298270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.298302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.298322 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.401524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.401563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.401574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.401592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.401604 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.496758 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7b3c34a-9870-4c9f-990b-29b7e768d5a5" containerID="3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c" exitCode=0 Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.496842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerDied","Data":"3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.513847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.513897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.513938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.513968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.513982 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.528184 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.547763 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.581018 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.607591 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.616754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.616819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.616839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.616867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.616887 4720 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.624561 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.645143 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.658644 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.674873 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.696454 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.720155 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.720200 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.720212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.720231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.720243 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.729712 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.747586 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.765719 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.782158 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.794508 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.809452 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:39Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.823107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.823156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.823186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.823207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.823220 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.926556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.926603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.926613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.926632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:39 crc kubenswrapper[4720]: I0122 06:35:39.926646 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:39Z","lastTransitionTime":"2026-01-22T06:35:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.030008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.030079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.030095 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.030122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.030141 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.034485 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.047038 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.062577 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.078033 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.091855 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.109228 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.125417 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.133470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.133516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.133529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.133549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.133565 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.146120 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.154713 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:58:59.295635792 +0000 UTC Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.165757 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7
610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.182295 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.197209 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.209707 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.209783 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:40 crc kubenswrapper[4720]: E0122 06:35:40.209933 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:40 crc kubenswrapper[4720]: E0122 06:35:40.210034 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.216939 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.235842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.235898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.235933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 
06:35:40.235955 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.235972 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.242649 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.260158 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.281536 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.304052 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.339431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.339474 4720 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.339483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.339502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.339511 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.442832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.442966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.442996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.443032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.443059 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.506124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" event={"ID":"c7b3c34a-9870-4c9f-990b-29b7e768d5a5","Type":"ContainerStarted","Data":"ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.515304 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.516324 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.516406 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.526706 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.544871 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.547231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.547272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.547289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.547314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.547328 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.549990 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.555007 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.562814 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\
":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.579082 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.600649 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.636085 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.651148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.651275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.651307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.651345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.651373 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.661789 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.676270 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.699467 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.720193 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.739074 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.755552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.755632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.755653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.755691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.755714 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.759006 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.783098 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f429
28e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.800659 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.815441 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.834513 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.858255 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.859876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.859953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.859968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.859989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.860000 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.887078 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.906133 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.918433 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.935224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.950366 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.962510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.962571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.962583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:40 crc 
kubenswrapper[4720]: I0122 06:35:40.962606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.962619 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:40Z","lastTransitionTime":"2026-01-22T06:35:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 06:35:40.964761 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:40 crc kubenswrapper[4720]: I0122 
06:35:40.987885 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:40Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.007501 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.048366 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.062420 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9
d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f16
7478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T
06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.065021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.065065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.065077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.065096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.065105 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.073978 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/
serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.083783 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.093701 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.155204 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 23:51:15.370625926 +0000 UTC Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.167334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.167376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.167385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.167403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.167414 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.210349 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.210553 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.269839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.269980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.270000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.270028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.270049 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.373132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.373190 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.373202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.373222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.373234 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.382819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.382878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.382890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.382930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.382944 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.404387 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.408020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.408085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.408096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.408114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.408128 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.420019 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.423886 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.423976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.424001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.424032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.424058 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.435493 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.439018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.439064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.439073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.439090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.439100 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.451153 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.454282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.454323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.454339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.454359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.454371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.468676 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:41Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:41 crc kubenswrapper[4720]: E0122 06:35:41.468832 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.476050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.476096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.476113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.476140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.476159 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.517821 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.578812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.578861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.578877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.578901 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.578943 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.682707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.682770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.682788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.682812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.682839 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.785803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.785835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.785844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.785859 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.785868 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.887916 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.887960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.887970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.887990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.888004 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.990319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.990366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.990382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.990402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:41 crc kubenswrapper[4720]: I0122 06:35:41.990414 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:41Z","lastTransitionTime":"2026-01-22T06:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.092956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.093001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.093010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.093030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.093040 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.156051 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:52:53.612979264 +0000 UTC Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.195176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.195217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.195227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.195246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.195256 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.209874 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.209962 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.210103 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.210252 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.297774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.297817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.297828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.297848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.297859 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.399744 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.399791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.399806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.399829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.399845 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.407141 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.407231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.407276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.407298 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:35:58.407280857 +0000 UTC m=+50.549187552 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.407358 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.407380 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.407406 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:58.407395341 +0000 UTC m=+50.549302056 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.407423 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-22 06:35:58.407414261 +0000 UTC m=+50.549320976 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.501877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.501928 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.501938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.501954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.501967 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.507818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.507946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508045 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508097 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508129 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508225 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:58.508197546 +0000 UTC m=+50.650104331 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508233 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508272 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508283 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:42 crc kubenswrapper[4720]: E0122 06:35:42.508344 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:58.50833393 +0000 UTC m=+50.650240725 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.521292 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.604816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.604875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.604887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.604939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.604965 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.707565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.707617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.707634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.707660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.707679 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.812175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.812263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.812276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.812306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.812319 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.915475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.915525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.915533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.915551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:42 crc kubenswrapper[4720]: I0122 06:35:42.915561 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:42Z","lastTransitionTime":"2026-01-22T06:35:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.018993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.019087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.019115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.019153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.019172 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.121976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.122020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.122034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.122057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.122070 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.157081 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:06:58.167635057 +0000 UTC Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.209617 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:43 crc kubenswrapper[4720]: E0122 06:35:43.209788 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.225009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.225078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.225090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.225110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.225123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.328141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.328186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.328197 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.328217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.328230 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.430651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.430713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.430731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.430762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.430782 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.533082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.533162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.533184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.533214 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.533236 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.636483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.636555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.636572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.636600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.636620 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.740010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.740071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.740099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.740126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.740146 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.843121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.843204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.843224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.843253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.843272 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.946967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.947033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.947052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.947080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:43 crc kubenswrapper[4720]: I0122 06:35:43.947097 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:43Z","lastTransitionTime":"2026-01-22T06:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.051477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.051546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.051563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.051591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.051609 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.155667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.155733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.155762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.155792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.155809 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.157898 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:05:22.369833229 +0000 UTC Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.210507 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.210583 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:44 crc kubenswrapper[4720]: E0122 06:35:44.210748 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:44 crc kubenswrapper[4720]: E0122 06:35:44.210970 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.259265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.259612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.259805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.260085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.260368 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.363981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.364052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.364071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.364101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.364121 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.466804 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.467163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.467261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.467367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.467457 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.534724 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/0.log" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.539717 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6" exitCode=1 Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.539830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.541471 4720 scope.go:117] "RemoveContainer" containerID="ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.572039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.572104 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.572122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.572153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.572174 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.576673 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying 
failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.603163 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.620678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.645395 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.666905 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.676455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.676667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.676691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 
06:35:44.676718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.676733 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.691409 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.714190 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.750835 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.774820 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.780178 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.780236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.780248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.780270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.780287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.799641 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.824125 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.843955 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.858520 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.878206 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.882977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.883118 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.883202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.883321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.883414 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.898124 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:44Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.987064 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.987221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.987254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:44 crc 
kubenswrapper[4720]: I0122 06:35:44.987276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:44 crc kubenswrapper[4720]: I0122 06:35:44.987291 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:44Z","lastTransitionTime":"2026-01-22T06:35:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.090802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.090877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.090895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.090961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.090982 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.158362 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:53:41.345356072 +0000 UTC Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.194256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.194329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.194356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.194390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.194414 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.209553 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:45 crc kubenswrapper[4720]: E0122 06:35:45.209839 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.304130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.304215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.304237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.304267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.304287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.408043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.408133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.408153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.408189 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.408210 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.511159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.511232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.511252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.511280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.511300 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.615042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.615149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.615176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.615209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.615234 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.718406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.718868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.718889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.718969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.718997 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.822789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.822878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.822901 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.822963 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.822991 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.954273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.954324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.954337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.954355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.954368 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:45Z","lastTransitionTime":"2026-01-22T06:35:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.991785 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t"] Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.992308 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.996697 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 06:35:45 crc kubenswrapper[4720]: I0122 06:35:45.997401 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.014219 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.029412 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.044149 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.047242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl9b5\" (UniqueName: 
\"kubernetes.io/projected/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-kube-api-access-zl9b5\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.047308 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.047405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.047458 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.056938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.056978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: 
I0122 06:35:46.056987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.057004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.057015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.058936 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.073323 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.089327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.105585 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.122985 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.145065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying 
failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.148631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl9b5\" (UniqueName: \"kubernetes.io/projected/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-kube-api-access-zl9b5\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.149419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.150505 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.150606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.150668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.151742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.158973 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:31:53.272457855 +0000 UTC Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.162227 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.163011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.163090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.163107 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.163132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.163146 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.171012 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.188354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl9b5\" (UniqueName: \"kubernetes.io/projected/83dddec7-9ecb-4d3b-97ac-e2f8f59e547c-kube-api-access-zl9b5\") pod \"ovnkube-control-plane-749d76644c-4c84t\" (UID: \"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.191313 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.208308 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.210715 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:46 crc kubenswrapper[4720]: E0122 06:35:46.210940 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.211250 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:46 crc kubenswrapper[4720]: E0122 06:35:46.211519 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.233523 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.252664 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.265900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.266032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.266055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.266086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.266107 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.270937 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.287428 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.304976 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" Jan 22 06:35:46 crc kubenswrapper[4720]: W0122 06:35:46.323150 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83dddec7_9ecb_4d3b_97ac_e2f8f59e547c.slice/crio-47eab7641845be28b26c864488d72dc38d175745dcce5e5670c055ba529b8518 WatchSource:0}: Error finding container 47eab7641845be28b26c864488d72dc38d175745dcce5e5670c055ba529b8518: Status 404 returned error can't find the container with id 47eab7641845be28b26c864488d72dc38d175745dcce5e5670c055ba529b8518 Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.370849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.370933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.370946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.370976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.370993 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.474530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.474611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.474625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.474679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.474695 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.554871 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/0.log" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.558969 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.559289 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.560076 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" event={"ID":"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c","Type":"ContainerStarted","Data":"47eab7641845be28b26c864488d72dc38d175745dcce5e5670c055ba529b8518"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.577575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.577637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.577655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.577685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.577705 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.579560 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.602195 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.624061 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.636743 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.651523 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.671642 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.680691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.680761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.680782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.680810 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.680829 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.689215 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.708972 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.730774 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.766128 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.783695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.783751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.783765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.783789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.783808 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.792094 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.819435 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.844537 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.862590 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.887144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.887460 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.887565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.887662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.887747 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.888644 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.908664 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.992577 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.992607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.992616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:46 crc 
kubenswrapper[4720]: I0122 06:35:46.992633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:46 crc kubenswrapper[4720]: I0122 06:35:46.992642 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:46Z","lastTransitionTime":"2026-01-22T06:35:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.095249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.095277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.095285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.095299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.095310 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.159226 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:48:25.148891462 +0000 UTC Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.197960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.198021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.198034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.198062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.198078 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.204263 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.210628 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:47 crc kubenswrapper[4720]: E0122 06:35:47.210879 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.300558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.300725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.300781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.300855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.300941 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.404021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.404077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.404091 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.404113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.404129 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.488451 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kvtch"] Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.490681 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: E0122 06:35:47.491020 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.506050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.506256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.506360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.506499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.506614 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.507479 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.525371 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.543763 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.566248 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.568165 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" event={"ID":"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c","Type":"ContainerStarted","Data":"8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.568239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" event={"ID":"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c","Type":"ContainerStarted","Data":"2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.570317 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/1.log" Jan 22 
06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.570344 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.570391 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhm9b\" (UniqueName: \"kubernetes.io/projected/409f50e8-9b68-4efe-8eb4-bc144d383817-kube-api-access-fhm9b\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.573237 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/0.log" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.581201 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a" exitCode=1 Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.581265 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.581345 4720 scope.go:117] "RemoveContainer" containerID="ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.583708 4720 scope.go:117] "RemoveContainer" containerID="8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a" 
Jan 22 06:35:47 crc kubenswrapper[4720]: E0122 06:35:47.584187 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.591696 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.609495 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.610446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.610549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.610573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.610601 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.610621 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.635218 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.655896 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.669954 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.672383 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.672619 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhm9b\" (UniqueName: \"kubernetes.io/projected/409f50e8-9b68-4efe-8eb4-bc144d383817-kube-api-access-fhm9b\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: E0122 06:35:47.672639 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:47 crc kubenswrapper[4720]: E0122 06:35:47.672746 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:48.172714268 +0000 UTC m=+40.314621013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.686078 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.701198 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
1-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f
26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.705136 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhm9b\" (UniqueName: \"kubernetes.io/projected/409f50e8-9b68-4efe-8eb4-bc144d383817-kube-api-access-fhm9b\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.713514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.713556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.713569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.713591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.713606 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.719785 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.734869 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.750882 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.772754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.807678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.816306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.816364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.816383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.816410 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.816430 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.827048 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of 
http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.844178 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.859644 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.875517 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.890157 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.905570 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.919971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.919964 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.920021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.920135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:47 crc 
kubenswrapper[4720]: I0122 06:35:47.920165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.920181 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:47Z","lastTransitionTime":"2026-01-22T06:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.939114 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.957669 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:47 crc kubenswrapper[4720]: I0122 06:35:47.983583 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z 
is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mount
Path\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:47Z is after 2025-08-24T17:21:41Z" Jan 22 
06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.007309 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77b
c1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.019119 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.023211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.023318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.023339 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.023365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.023382 4720 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.039224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.052304 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc 
kubenswrapper[4720]: I0122 06:35:48.107001 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.126521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.126562 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.126575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.126596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.126610 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.131019 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.153369 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.159563 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 01:00:31.27885216 +0000 UTC Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.168565 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.178251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:48 crc kubenswrapper[4720]: E0122 06:35:48.178436 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:48 crc kubenswrapper[4720]: E0122 06:35:48.178499 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:49.178482924 +0000 UTC m=+41.320389629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.210559 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:48 crc kubenswrapper[4720]: E0122 06:35:48.215208 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.215224 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:48 crc kubenswrapper[4720]: E0122 06:35:48.215341 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.229406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.229457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.229469 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.229487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.229499 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.231792 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.246560 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.259278 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.279199 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f
8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-
22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.293179 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.305204 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.321356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.331256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.331288 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.331298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.331317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.331327 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.334825 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.346197 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.363393 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.379926 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.400816 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab402c5e4e13bb7d60dfd745a6c6a7becd4ea9eab192323e5066ea6252f8c6d6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:43Z\\\",\\\"message\\\":\\\"140\\\\nI0122 06:35:42.520091 6010 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.518994 6010 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Node\\\\nI0122 06:35:42.520373 6010 reflector.go:311] Stopping reflector 
*v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520388 6010 nad_controller.go:166] [zone-nad-controller NAD controller]: shutting down\\\\nI0122 06:35:42.520834 6010 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.520852 6010 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0122 06:35:42.520984 6010 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0122 06:35:42.521130 6010 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0122 06:35:42.522315 6010 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z 
is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mount
Path\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 
06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.419933 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77b
c1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"po
dIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.430071 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.434536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.434574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.434588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.434609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.434621 4720 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.442541 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.457361 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.470069 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc 
kubenswrapper[4720]: I0122 06:35:48.537785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.537849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.537868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.537897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.537939 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.587259 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/1.log" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.592169 4720 scope.go:117] "RemoveContainer" containerID="8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a" Jan 22 06:35:48 crc kubenswrapper[4720]: E0122 06:35:48.592475 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.612460 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.626462 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.640833 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.642741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.642784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.642798 4720 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.642819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.642832 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.659108 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.680975 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.704013 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.722501 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.746072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.746119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.746131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.746181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.746195 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.759541 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.778452 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.815619 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.838476 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.848670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.848721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.848737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.848762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.848779 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.858497 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.874837 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.893370 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.908438 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.924199 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.938190 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.951273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.951338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.951366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:48 crc 
kubenswrapper[4720]: I0122 06:35:48.951401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:48 crc kubenswrapper[4720]: I0122 06:35:48.951426 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:48Z","lastTransitionTime":"2026-01-22T06:35:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.055087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.055130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.055143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.055162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.055175 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.158386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.158435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.158451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.158479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.158496 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.160264 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 10:34:59.000864894 +0000 UTC Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.189298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:49 crc kubenswrapper[4720]: E0122 06:35:49.189531 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:49 crc kubenswrapper[4720]: E0122 06:35:49.189636 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:51.189610876 +0000 UTC m=+43.331517591 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.210446 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:49 crc kubenswrapper[4720]: E0122 06:35:49.210606 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.210997 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:49 crc kubenswrapper[4720]: E0122 06:35:49.211082 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.261786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.261860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.261880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.261943 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.261970 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.365295 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.365363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.365385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.365416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.365439 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.468838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.468900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.469012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.469039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.469060 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.572854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.573017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.573052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.573090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.573115 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.676499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.676607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.676635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.676668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.676690 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.779282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.779354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.779382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.779416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.779444 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.883057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.883132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.883168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.883213 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.883238 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.987047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.987112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.987136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.987163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:49 crc kubenswrapper[4720]: I0122 06:35:49.987183 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:49Z","lastTransitionTime":"2026-01-22T06:35:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.091299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.091351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.091370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.091395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.091413 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.161363 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:31:29.117634414 +0000 UTC Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.195515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.195584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.195608 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.195639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.195660 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.210405 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.210550 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:50 crc kubenswrapper[4720]: E0122 06:35:50.210660 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:50 crc kubenswrapper[4720]: E0122 06:35:50.210777 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.299317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.299367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.299384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.299412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.299430 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.403149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.403228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.403246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.403276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.403293 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.506355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.506413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.506431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.506456 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.506476 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.617302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.618255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.618277 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.618309 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.618331 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.721697 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.721770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.721789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.721823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.721844 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.825227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.825300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.825324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.825357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.825379 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.928700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.928751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.928769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.928795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:50 crc kubenswrapper[4720]: I0122 06:35:50.928816 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:50Z","lastTransitionTime":"2026-01-22T06:35:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.032401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.032483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.032507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.032540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.032563 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.136426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.136511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.136531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.136558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.136577 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.161831 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:25:13.6362639 +0000 UTC Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.210779 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.210785 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.211042 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.211082 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.214166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.214403 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.214585 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:35:55.214539469 +0000 UTC m=+47.356446174 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.240445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.240512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.240530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.240557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.240576 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.345003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.345060 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.345075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.345098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.345114 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.448029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.448418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.448551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.448733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.448885 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.552778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.552826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.552841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.552863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.552877 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.656112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.656184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.656204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.656234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.656253 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.739001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.739054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.739068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.739092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.739110 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.757731 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.764235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.764442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.764507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.764606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.764676 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.779834 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.784607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.784664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.784685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.784714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.784735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.807389 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.813069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.813155 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.813230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.813332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.813417 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.835029 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.841061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.841145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.841167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.841267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.841289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.862405 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:51Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:51 crc kubenswrapper[4720]: E0122 06:35:51.862649 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.865221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.865275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.865294 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.865318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.865336 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.969492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.969578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.969605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.969650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:51 crc kubenswrapper[4720]: I0122 06:35:51.969675 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:51Z","lastTransitionTime":"2026-01-22T06:35:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.073513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.073572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.073592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.073619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.073638 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.162740 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:18:51.161378966 +0000 UTC Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.176260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.176301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.176313 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.176332 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.176345 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.209697 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.209697 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:52 crc kubenswrapper[4720]: E0122 06:35:52.209874 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:52 crc kubenswrapper[4720]: E0122 06:35:52.210004 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.279881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.279958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.279989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.280017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.280035 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.383705 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.383780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.383792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.383811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.383823 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.486897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.487010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.487029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.487056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.487075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.590590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.590639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.590657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.590682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.590700 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.695198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.695302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.695325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.695362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.695392 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.799261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.799334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.799357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.799387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.799406 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.902545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.902613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.902634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.902661 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:52 crc kubenswrapper[4720]: I0122 06:35:52.902687 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:52Z","lastTransitionTime":"2026-01-22T06:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.006586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.006702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.006721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.006757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.006777 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.110235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.110333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.110351 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.110384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.110407 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.163252 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:54:23.458934353 +0000 UTC Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.210392 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.210494 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:53 crc kubenswrapper[4720]: E0122 06:35:53.210675 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:53 crc kubenswrapper[4720]: E0122 06:35:53.210902 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.213435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.213488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.213505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.213531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.213550 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.317179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.317239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.317261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.317291 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.317312 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.420480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.420540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.420560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.420592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.420612 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.524124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.524228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.524265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.524302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.524327 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.627128 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.627495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.627625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.627751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.627881 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.731471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.732464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.732688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.732882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.733111 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.837886 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.838042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.838073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.838116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.838142 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.942958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.943027 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.943051 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.943088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:53 crc kubenswrapper[4720]: I0122 06:35:53.943114 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:53Z","lastTransitionTime":"2026-01-22T06:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.046124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.046238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.046261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.046292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.046317 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.149225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.149286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.149311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.149367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.149394 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.163810 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:53:32.16479665 +0000 UTC Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.210468 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.210580 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:54 crc kubenswrapper[4720]: E0122 06:35:54.210654 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:54 crc kubenswrapper[4720]: E0122 06:35:54.210768 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.252597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.252638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.252651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.252670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.252684 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.356205 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.356267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.356279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.356300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.356313 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.459223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.459296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.459323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.459360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.459380 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.561987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.562035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.562048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.562068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.562081 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.664900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.664981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.664998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.665025 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.665042 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.773780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.773870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.773892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.774500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.774570 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.878743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.878829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.878856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.878889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.878939 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.982556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.982620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.982636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.982663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:54 crc kubenswrapper[4720]: I0122 06:35:54.982681 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:54Z","lastTransitionTime":"2026-01-22T06:35:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.085536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.085636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.085655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.085685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.085706 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.164434 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 01:50:35.676493952 +0000 UTC Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.189401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.189467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.189489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.189522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.189541 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.209760 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.209791 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:55 crc kubenswrapper[4720]: E0122 06:35:55.210159 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:55 crc kubenswrapper[4720]: E0122 06:35:55.210429 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.266419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:55 crc kubenswrapper[4720]: E0122 06:35:55.266705 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:55 crc kubenswrapper[4720]: E0122 06:35:55.266815 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:03.266789304 +0000 UTC m=+55.408696009 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.292649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.292693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.292710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.292730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.292746 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.395636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.395709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.395734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.395762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.395784 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.498758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.498813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.498825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.498843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.498854 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.601058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.601126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.601143 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.601168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.601185 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.704582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.704648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.704665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.704690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.704710 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.808570 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.808667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.808690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.808779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.808803 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.912050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.912117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.912137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.912164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:55 crc kubenswrapper[4720]: I0122 06:35:55.912186 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:55Z","lastTransitionTime":"2026-01-22T06:35:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.016007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.016093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.016110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.016139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.016161 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.119298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.119353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.119362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.119381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.119393 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.165087 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 20:30:19.153536416 +0000 UTC Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.210004 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.210035 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:56 crc kubenswrapper[4720]: E0122 06:35:56.210358 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:56 crc kubenswrapper[4720]: E0122 06:35:56.210215 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.222196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.222250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.222268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.222297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.222318 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.326560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.326643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.326668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.326696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.326716 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.430276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.430365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.430384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.430414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.430433 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.533561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.533637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.533651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.533674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.533688 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.637522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.637585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.637602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.637627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.637646 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.741672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.741732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.741754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.741780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.741797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.846720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.846780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.846798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.846824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.846842 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.950399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.950473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.950493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.950523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:56 crc kubenswrapper[4720]: I0122 06:35:56.950544 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:56Z","lastTransitionTime":"2026-01-22T06:35:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.054583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.054657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.054667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.054689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.054702 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.164602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.164672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.164692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.164725 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.164745 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.165552 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:36:35.404590727 +0000 UTC Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.210771 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.210778 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:57 crc kubenswrapper[4720]: E0122 06:35:57.211028 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:57 crc kubenswrapper[4720]: E0122 06:35:57.211292 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.269186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.269256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.269275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.269303 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.269323 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.374003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.374069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.374087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.374120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.374145 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.477504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.477589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.477616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.477653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.477672 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.581363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.581431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.581451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.581483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.581503 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.685465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.685541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.685559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.685594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.685613 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.789212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.789270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.789290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.789315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.789333 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.892429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.892462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.892472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.892487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.892496 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.995263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.995314 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.995326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.995349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:57 crc kubenswrapper[4720]: I0122 06:35:57.995361 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:57Z","lastTransitionTime":"2026-01-22T06:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.098694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.098745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.098755 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.098775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.098787 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.165767 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:25:25.058642815 +0000 UTC Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.202105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.202165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.202183 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.202213 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.202233 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.210445 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.210614 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.211399 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.211668 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.234848 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.254707 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.274760 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.295160 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.304665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.304718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.304735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc 
kubenswrapper[4720]: I0122 06:35:58.304764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.304786 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.313588 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22
T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.334787 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.353721 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.372698 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.395079 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.407438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.407474 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.407485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.407503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.407513 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.422840 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.438643 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.450639 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.467417 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.492196 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f
1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.504687 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.504971 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:36:30.50488587 +0000 UTC m=+82.646792605 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.505129 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.505254 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.505374 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.505452 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:30.505433076 +0000 UTC m=+82.647339791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.505693 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.505751 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:30.505738345 +0000 UTC m=+82.647645060 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.508041 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.510878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.510983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.511003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.511032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.511052 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.522239 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.535034 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:58Z is after 2025-08-24T17:21:41Z" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.606201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.606279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606485 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606519 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606535 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606606 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:30.606586712 +0000 UTC m=+82.748493427 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606637 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606694 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606719 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:58 crc kubenswrapper[4720]: E0122 06:35:58.606832 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:30.606800708 +0000 UTC m=+82.748707453 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.614087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.614140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.614173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.614196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.614210 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.717030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.717081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.717094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.717118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.717136 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.819557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.819622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.819669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.819696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.819715 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.922453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.922504 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.922523 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.922547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:58 crc kubenswrapper[4720]: I0122 06:35:58.922565 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:58Z","lastTransitionTime":"2026-01-22T06:35:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.027150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.027210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.027228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.027252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.027270 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.131648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.131722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.131748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.131780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.131803 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.166463 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:38:40.571438204 +0000 UTC Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.210069 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.210093 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:35:59 crc kubenswrapper[4720]: E0122 06:35:59.210289 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:35:59 crc kubenswrapper[4720]: E0122 06:35:59.210526 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.235247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.235320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.235340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.235373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.235393 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.338840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.338905 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.338962 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.338994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.339014 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.442976 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.443052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.443072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.443103 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.443125 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.547236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.547305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.547331 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.547359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.547377 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.651603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.651670 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.651687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.651713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.651732 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.755761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.755841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.755885 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.755958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.755989 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.859182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.859226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.859238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.859258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.859273 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.962609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.962655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.962672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.962694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:35:59 crc kubenswrapper[4720]: I0122 06:35:59.962712 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:35:59Z","lastTransitionTime":"2026-01-22T06:35:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.065479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.065529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.065540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.065579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.065592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.166940 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:11:19.389279797 +0000 UTC Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.169658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.169718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.169738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.169768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.169789 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.210163 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.210433 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:00 crc kubenswrapper[4720]: E0122 06:36:00.210632 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:00 crc kubenswrapper[4720]: E0122 06:36:00.210825 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.273585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.273838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.274029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.274236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.274456 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.378049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.378098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.378111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.378131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.378148 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.481489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.481526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.481535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.481551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.481563 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.584965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.585098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.585130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.585170 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.585199 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.689461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.690035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.690242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.690441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.690678 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.794425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.794498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.794516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.794546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.794569 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.897745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.897826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.897843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.897873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:00 crc kubenswrapper[4720]: I0122 06:36:00.897896 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:00Z","lastTransitionTime":"2026-01-22T06:36:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.002471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.002553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.002572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.002604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.002622 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.107164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.107467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.107598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.107724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.107842 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.167222 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:25:44.598505813 +0000 UTC Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.210232 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:01 crc kubenswrapper[4720]: E0122 06:36:01.210445 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.210232 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:01 crc kubenswrapper[4720]: E0122 06:36:01.211411 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.211841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.212085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.212258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.212397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.212523 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.316571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.316632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.316651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.316678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.316697 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.420113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.420619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.420777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.420947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.421068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.524761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.524822 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.524838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.524865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.524891 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.628246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.628344 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.628370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.628408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.628432 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.731779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.732192 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.732350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.732443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.732525 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.835745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.836588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.836749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.836933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.837103 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.941036 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.941130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.941148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.941180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:01 crc kubenswrapper[4720]: I0122 06:36:01.941200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:01Z","lastTransitionTime":"2026-01-22T06:36:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.045062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.045116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.045133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.045158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.045177 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.112061 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.133057 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.137094 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.148667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.148757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.148781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.148890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.148999 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.157980 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.168226 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:58:02.237973846 +0000 UTC Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.176602 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.176810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.177024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.177225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.177426 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.190227 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.197326 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.202543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.202599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.202617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.202645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.202666 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.209919 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.209995 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.210109 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.210373 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.217807 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.236827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.242462 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.247554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.247631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.247651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.247677 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.247696 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.255490 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.269407 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.276316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.276399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.276417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.276446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.276464 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.281510 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4c
ac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.292571 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.296422 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.297770 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.297836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.297850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.297868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.297879 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.307643 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc 
kubenswrapper[4720]: E0122 06:36:02.311154 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: E0122 06:36:02.311259 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.312703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.312722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.312732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.312749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.312762 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.323351 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.338259 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.358820 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"20
26-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-
resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.373516 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.385972 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.399028 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.413413 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.415777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.415836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.415851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc 
kubenswrapper[4720]: I0122 06:36:02.415872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.415886 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.431406 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:02Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 
06:36:02.519139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.519217 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.519237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.519264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.519285 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.622154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.622223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.622241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.622268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.622288 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.726937 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.727004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.727021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.727051 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.727068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.830857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.830965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.830996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.831030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.831055 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.936377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.936437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.936457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.936489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:02 crc kubenswrapper[4720]: I0122 06:36:02.936512 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:02Z","lastTransitionTime":"2026-01-22T06:36:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.039823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.039879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.039900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.039973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.039992 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.143816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.143884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.143940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.143979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.144005 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.168773 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:32:13.333653476 +0000 UTC Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.210593 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.210692 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:03 crc kubenswrapper[4720]: E0122 06:36:03.210825 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:03 crc kubenswrapper[4720]: E0122 06:36:03.211427 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.211906 4720 scope.go:117] "RemoveContainer" containerID="8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.248029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.248256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.248401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.248540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.248675 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.267989 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:03 crc kubenswrapper[4720]: E0122 06:36:03.268901 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:36:03 crc kubenswrapper[4720]: E0122 06:36:03.269220 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:19.269190148 +0000 UTC m=+71.411096893 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.352576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.353363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.353500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.353709 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.353874 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.456572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.456615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.456628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.456654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.456667 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.559610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.559675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.559694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.559721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.559738 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.656424 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/1.log" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.662347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.662383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.662396 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.662416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.662433 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.663587 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.664272 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.685142 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-con
fig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-s
etup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.699549 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.709371 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.718721 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.734041 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.747258 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.759335 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.765055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.765105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.765120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.765146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.765159 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.772155 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.783392 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc 
kubenswrapper[4720]: I0122 06:36:03.819758 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.856096 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.867651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.867701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.867714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.867736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.867746 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.872535 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.886252 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06
:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.898147 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\
\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.910485 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.923032 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.937885 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.950073 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:03Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.969958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.970012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.970024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:03 crc 
kubenswrapper[4720]: I0122 06:36:03.970044 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:03 crc kubenswrapper[4720]: I0122 06:36:03.970058 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:03Z","lastTransitionTime":"2026-01-22T06:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.072818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.072855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.072864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.072881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.072891 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.169165 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:54:57.419342777 +0000 UTC Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.175394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.175465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.175479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.175497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.175509 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.210271 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.210370 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:04 crc kubenswrapper[4720]: E0122 06:36:04.210482 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:04 crc kubenswrapper[4720]: E0122 06:36:04.210611 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.278168 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.278216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.278233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.278250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.278262 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.388186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.388248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.388268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.388298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.388318 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.491315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.491657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.491802 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.491971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.492139 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.596405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.596945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.597202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.597427 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.597623 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.671729 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/2.log" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.672886 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/1.log" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.678255 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" exitCode=1 Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.678368 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.678603 4720 scope.go:117] "RemoveContainer" containerID="8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.679503 4720 scope.go:117] "RemoveContainer" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" Jan 22 06:36:04 crc kubenswrapper[4720]: E0122 06:36:04.679964 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.701208 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.701273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.701296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.701330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.701351 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.703266 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e30301
75237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.726858 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.747048 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.769724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.793154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.804642 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.804726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.804752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc 
kubenswrapper[4720]: I0122 06:36:04.804824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.804854 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.814820 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.847491 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e12d4480e256b933eebea365d0b57b3ab1be7aef00a03a2ff218e1bf3d7460a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:35:46Z\\\",\\\"message\\\":\\\"handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:35:46Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:35:46.440584 6161 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"a36f6289-d09f-43f8-8a8a-c9d2cc11eb0d\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-daemon_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\",\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller 
initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 
obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b6
12b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.876217 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.893648 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.908357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.908418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.908436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.908462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.908483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:04Z","lastTransitionTime":"2026-01-22T06:36:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.911517 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc 
kubenswrapper[4720]: I0122 06:36:04.933800 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.954491 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.973606 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:04 crc kubenswrapper[4720]: I0122 06:36:04.989556 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.010973 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.011315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.011393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.011411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.011441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.011467 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.042374 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.065482 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.087139 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.115255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.115341 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.115362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.115388 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.115410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.170000 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:20:26.184668923 +0000 UTC Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.209657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.209712 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:05 crc kubenswrapper[4720]: E0122 06:36:05.209866 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:05 crc kubenswrapper[4720]: E0122 06:36:05.210101 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.217714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.217773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.217791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.217813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.217836 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.321413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.321495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.321519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.321550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.321612 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.425423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.425491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.425513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.425542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.425562 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.530767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.530845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.530871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.530959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.530990 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.635305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.635795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.635830 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.635868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.635895 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.686699 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/2.log" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.693734 4720 scope.go:117] "RemoveContainer" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" Jan 22 06:36:05 crc kubenswrapper[4720]: E0122 06:36:05.694112 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.718320 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.739269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.739534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.739603 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.739669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.740020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.740048 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.755316 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.775993 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.797538 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.818813 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.837535 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.843024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.843074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.843090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.843118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.843138 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.853071 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.884524 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.908803 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.927129 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc 
kubenswrapper[4720]: I0122 06:36:05.950130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.950221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.950242 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.950272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.950324 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:05Z","lastTransitionTime":"2026-01-22T06:36:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.963728 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:05 crc kubenswrapper[4720]: I0122 06:36:05.989391 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:05Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.011855 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:06Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.031243 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:36:06Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.047437 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:06Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.054256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.054310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.054325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.054349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.054368 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.065699 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:06Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.080624 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:06Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.157691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.157738 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.157751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.157769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.157786 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.170331 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:43:14.293983137 +0000 UTC Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.210147 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.210174 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:06 crc kubenswrapper[4720]: E0122 06:36:06.210441 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:06 crc kubenswrapper[4720]: E0122 06:36:06.210505 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.261491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.261550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.261565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.261589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.261603 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.364762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.364834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.364853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.364882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.364900 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.468702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.468773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.468796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.468823 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.468841 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.573591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.573658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.573686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.573717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.573735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.677285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.677341 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.677353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.677373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.677386 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.780385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.780452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.780471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.780498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.780521 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.884416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.884482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.884495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.884517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.884530 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.988658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.988712 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.988724 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.988746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:06 crc kubenswrapper[4720]: I0122 06:36:06.988761 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:06Z","lastTransitionTime":"2026-01-22T06:36:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.091925 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.091979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.091989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.092058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.092071 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.170787 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:28:08.969320306 +0000 UTC Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.195500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.195564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.195583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.195611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.195632 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.210130 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.210321 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:07 crc kubenswrapper[4720]: E0122 06:36:07.210527 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:07 crc kubenswrapper[4720]: E0122 06:36:07.210781 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.299565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.299657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.299675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.299701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.299720 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.404028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.404113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.404137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.404176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.404200 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.508161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.508233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.508257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.508292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.508313 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.612186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.612762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.612788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.612827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.612849 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.715983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.716069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.716086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.716117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.716137 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.819424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.819477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.819486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.819508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.819520 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.923494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.923549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.923566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.923595 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:07 crc kubenswrapper[4720]: I0122 06:36:07.923614 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:07Z","lastTransitionTime":"2026-01-22T06:36:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.027305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.027384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.027409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.027478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.027505 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.130682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.130751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.130769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.130801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.130819 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.171160 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:36:10.729561058 +0000 UTC Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.211842 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.211967 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:08 crc kubenswrapper[4720]: E0122 06:36:08.212253 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:08 crc kubenswrapper[4720]: E0122 06:36:08.212867 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.234340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.234487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.234512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.235201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.235240 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.239537 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.258887 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.279362 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.298842 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.320387 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.338797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.338862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.338888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc 
kubenswrapper[4720]: I0122 06:36:08.338953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.338977 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.343900 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.375572 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during 
admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.399821 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.413041 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.430871 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.442897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.442991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.443012 4720 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.443047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.443067 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.451637 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.468005 4720 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.484097 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.499646 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc 
kubenswrapper[4720]: I0122 06:36:08.522290 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc 
kubenswrapper[4720]: I0122 06:36:08.546957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.547070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.547092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.547159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.547181 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.548224 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.569264 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.590101 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:08Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.650614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.650683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.650706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.650739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.650763 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.754651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.754719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.754739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.754767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.754788 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.857865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.857954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.857975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.857997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.858015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.961273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.961346 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.961370 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.961396 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:08 crc kubenswrapper[4720]: I0122 06:36:08.961416 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:08Z","lastTransitionTime":"2026-01-22T06:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.065871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.065934 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.065944 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.065961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.065971 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.169870 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.169966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.169982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.170006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.170023 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.171312 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:25:01.055474813 +0000 UTC Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.210025 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.210051 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:09 crc kubenswrapper[4720]: E0122 06:36:09.210221 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:09 crc kubenswrapper[4720]: E0122 06:36:09.210293 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.272969 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.273058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.273083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.273116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.273142 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.376865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.376965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.376984 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.377011 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.377028 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.480145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.480231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.480258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.480292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.480316 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.583939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.584015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.584034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.584097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.584116 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.688185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.688260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.688279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.688311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.688331 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.792084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.792149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.792167 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.792198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.792218 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.896542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.896607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.896636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.896667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.896692 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.999514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.999571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.999593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.999623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:09 crc kubenswrapper[4720]: I0122 06:36:09.999643 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:09Z","lastTransitionTime":"2026-01-22T06:36:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.102606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.102668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.102691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.102721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.102743 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.171738 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 05:16:16.36007892 +0000 UTC Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.206089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.206181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.206201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.206229 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.206248 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.210403 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.210568 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:10 crc kubenswrapper[4720]: E0122 06:36:10.210584 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:10 crc kubenswrapper[4720]: E0122 06:36:10.210978 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.309573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.309653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.309671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.309701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.309721 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.412580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.412645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.412663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.412691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.412714 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.515421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.515468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.515484 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.515515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.515537 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.618860 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.618927 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.618939 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.618960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.618972 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.721826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.721865 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.721874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.721890 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.721902 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.825131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.825203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.825228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.825470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.825518 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.929187 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.929228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.929239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.929258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:10 crc kubenswrapper[4720]: I0122 06:36:10.929272 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:10Z","lastTransitionTime":"2026-01-22T06:36:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.033348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.033408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.033433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.033466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.033491 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.136745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.136799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.136817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.136843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.136862 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.172133 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:13:25.274683594 +0000 UTC Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.210792 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.210820 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:11 crc kubenswrapper[4720]: E0122 06:36:11.211148 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:11 crc kubenswrapper[4720]: E0122 06:36:11.211234 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.241993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.242067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.242091 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.242122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.242146 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.345666 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.345733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.345751 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.345778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.345797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.448722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.448775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.448788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.448807 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.448820 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.551514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.551566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.551580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.551603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.551617 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.654565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.654645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.654665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.654694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.654715 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.757611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.757662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.757675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.757696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.757712 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.860722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.860767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.860780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.860803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.860817 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.964317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.964391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.964412 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.964443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:11 crc kubenswrapper[4720]: I0122 06:36:11.964469 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:11Z","lastTransitionTime":"2026-01-22T06:36:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.067571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.068358 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.068429 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.068466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.068486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.171402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.171482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.171507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.171537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.171556 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.172617 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:43:43.602175043 +0000 UTC Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.214061 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.214198 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.214261 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.214487 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.274198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.274244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.274258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.274279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.274293 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.377251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.377337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.377356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.377388 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.377412 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.446741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.446790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.446800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.446819 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.446831 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.462656 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.467471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.467525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.467543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.467572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.467588 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.481769 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.486825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.486880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.486892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.486930 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.486946 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.499153 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.502997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.503055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.503073 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.503100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.503123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.515041 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.519432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.519487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.519617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.519769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.519922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.533985 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:12Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:12 crc kubenswrapper[4720]: E0122 06:36:12.534160 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.536450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.536519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.536543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.536576 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.536599 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.638419 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.638451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.638462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.638480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.638492 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.741455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.741574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.741593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.741639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.741659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.845198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.845240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.845249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.845266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.845278 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.948352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.948435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.948477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.948516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:12 crc kubenswrapper[4720]: I0122 06:36:12.948540 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:12Z","lastTransitionTime":"2026-01-22T06:36:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.051457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.051515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.051534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.051561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.051579 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.154628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.154668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.154686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.154715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.154734 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.172851 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 19:54:35.023075947 +0000 UTC Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.210402 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:13 crc kubenswrapper[4720]: E0122 06:36:13.210569 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.210833 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:13 crc kubenswrapper[4720]: E0122 06:36:13.210972 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.257457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.257494 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.257513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.257536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.257555 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.360546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.360612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.360630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.360657 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.360674 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.463356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.463414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.463425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.463444 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.463457 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.565449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.565489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.565502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.565521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.565530 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.668683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.668737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.668750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.668771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.668785 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.772006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.772065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.772087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.772115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.772132 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.874960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.875004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.875018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.875034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.875044 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.978244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.978266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.978278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.978292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:13 crc kubenswrapper[4720]: I0122 06:36:13.978300 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:13Z","lastTransitionTime":"2026-01-22T06:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.080711 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.080763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.080780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.080808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.080824 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.173691 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:12:42.713599296 +0000 UTC
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.183554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.183600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.183615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.183640 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.183655 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.209716 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.209748 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:14 crc kubenswrapper[4720]: E0122 06:36:14.209856 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:14 crc kubenswrapper[4720]: E0122 06:36:14.210052 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.285956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.286040 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.286065 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.286090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.286107 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.388617 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.388667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.388680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.388698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.388711 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.490835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.490875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.490889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.490924 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.490937 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.593096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.593142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.593159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.593179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.593196 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.696025 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.696054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.696063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.696078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.696088 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.798201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.798247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.798266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.798286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.798301 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.900318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.900364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.900375 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.900393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:14 crc kubenswrapper[4720]: I0122 06:36:14.900403 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:14Z","lastTransitionTime":"2026-01-22T06:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.002475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.002563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.002598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.002634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.002663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.105087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.105123 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.105132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.105149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.105158 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.174642 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 18:03:12.42550915 +0000 UTC
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.207674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.207710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.207718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.207735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.207744 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.209945 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.210021 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:15 crc kubenswrapper[4720]: E0122 06:36:15.210068 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:15 crc kubenswrapper[4720]: E0122 06:36:15.210161 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.309971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.310013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.310024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.310039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.310051 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.412353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.412383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.412391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.412404 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.412413 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.515180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.515236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.515253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.515276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.515293 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.617561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.617642 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.617678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.617713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.617735 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.720207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.720255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.720264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.720282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.720293 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.822510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.822623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.822636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.822651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.822662 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.924758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.924799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.924811 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.924827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:15 crc kubenswrapper[4720]: I0122 06:36:15.924836 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:15Z","lastTransitionTime":"2026-01-22T06:36:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.026873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.026951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.026968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.026984 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.026998 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.129836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.129881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.129892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.129929 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.129944 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.175134 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:44:07.582047442 +0000 UTC
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.210124 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:16 crc kubenswrapper[4720]: E0122 06:36:16.210312 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.210582 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:16 crc kubenswrapper[4720]: E0122 06:36:16.210687 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.232513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.232558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.232568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.232587 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.232599 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.336395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.336499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.336519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.336546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.336568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.440377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.440431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.440442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.440463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.440476 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.544173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.544227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.544237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.544259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.544270 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.647312 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.647353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.647368 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.647393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.647409 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.749383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.749446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.749472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.749497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.749515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.851568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.851603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.851637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.851655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.851666 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.954491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.954552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.954570 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.954597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:16 crc kubenswrapper[4720]: I0122 06:36:16.954620 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:16Z","lastTransitionTime":"2026-01-22T06:36:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.057606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.057645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.057654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.057672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.057682 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.160180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.160216 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.160225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.160244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.160254 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.175331 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:26:29.627355216 +0000 UTC Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.210116 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.210158 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:17 crc kubenswrapper[4720]: E0122 06:36:17.210240 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:17 crc kubenswrapper[4720]: E0122 06:36:17.210358 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.263110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.263141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.263151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.263166 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.263175 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.365861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.365922 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.365936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.365951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.365964 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.468519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.468574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.468596 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.468616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.468625 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.571742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.571812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.571834 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.571867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.571903 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.675097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.675163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.675182 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.675211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.675233 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.778704 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.778777 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.778789 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.778808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.778821 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.881188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.881241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.881251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.881270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.881281 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.984432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.984500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.984514 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.984539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:17 crc kubenswrapper[4720]: I0122 06:36:17.984553 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:17Z","lastTransitionTime":"2026-01-22T06:36:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.087817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.087898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.087960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.088001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.088030 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.175669 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:04:38.160588794 +0000 UTC Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.190636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.190674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.190683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.190698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.190708 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.210179 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.210215 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:18 crc kubenswrapper[4720]: E0122 06:36:18.210315 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:18 crc kubenswrapper[4720]: E0122 06:36:18.210416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.236476 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.253160 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.268065 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.279675 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.291080 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.293801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.293833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.293848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 
06:36:18.293866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.293878 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.302350 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.316775 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.345542 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.365405 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.376152 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.388141 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.396511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.396540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.396552 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.396570 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.396582 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.417352 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.436688 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.450805 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.467202 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.478059 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.488998 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.498702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 
06:36:18.498742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.498764 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.498797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.498820 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.498927 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:18Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.600452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.600827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.601005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.601154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.601285 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.704598 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.704665 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.704678 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.704703 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.704719 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.807288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.807628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.807759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.807954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.808121 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.911298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.911361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.911372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.911393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:18 crc kubenswrapper[4720]: I0122 06:36:18.911406 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:18Z","lastTransitionTime":"2026-01-22T06:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.014609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.015033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.015198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.015367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.015503 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.117892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.117968 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.117979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.117998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.118010 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.176587 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:26:35.420123667 +0000 UTC Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.210227 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.210327 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:19 crc kubenswrapper[4720]: E0122 06:36:19.211095 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.211232 4720 scope.go:117] "RemoveContainer" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" Jan 22 06:36:19 crc kubenswrapper[4720]: E0122 06:36:19.211333 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:19 crc kubenswrapper[4720]: E0122 06:36:19.211748 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.222273 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.222348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.222377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.222399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.222410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.325026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.325080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.325090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.325108 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.325118 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.358780 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:19 crc kubenswrapper[4720]: E0122 06:36:19.358939 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:36:19 crc kubenswrapper[4720]: E0122 06:36:19.358997 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:36:51.358980203 +0000 UTC m=+103.500886908 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.427323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.427371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.427397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.427415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.427425 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.530330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.530638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.530710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.530806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.530877 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.634382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.634443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.634458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.634486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.634524 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.737448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.737485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.737497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.737516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.737528 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.741347 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/0.log" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.741416 4720 generic.go:334] "Generic (PLEG): container finished" podID="85373343-156d-4de0-a72b-baaf7c4e3d08" containerID="e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7" exitCode=1 Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.741457 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerDied","Data":"e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.741981 4720 scope.go:117] "RemoveContainer" containerID="e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.754935 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.772577 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.785636 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.798682 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.815002 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.827361 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.839521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.839580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.839603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.839634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.839657 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.841135 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.860358 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.874192 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.887664 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.900431 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.912122 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.925331 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.938385 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc 
kubenswrapper[4720]: I0122 06:36:19.941932 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.941988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.942003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.942026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.942039 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:19Z","lastTransitionTime":"2026-01-22T06:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.953229 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.968381 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:19 crc kubenswrapper[4720]: I0122 06:36:19.989520 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e
8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:19Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.005366 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.044594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.044650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.044668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.044727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.044748 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.147437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.147485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.147498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.147519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.147529 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.176891 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:41:42.298270346 +0000 UTC Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.210277 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.210307 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:20 crc kubenswrapper[4720]: E0122 06:36:20.210449 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:20 crc kubenswrapper[4720]: E0122 06:36:20.210560 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.250208 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.250251 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.250269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.250289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.250301 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.352961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.353002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.353023 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.353043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.353053 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.455817 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.455888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.455903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.455940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.455955 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.559221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.559315 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.559340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.559372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.559395 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.662386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.662426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.662436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.662453 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.662462 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.748042 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/0.log" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.748129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerStarted","Data":"b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.760983 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.765298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.765336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.765345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.765360 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.765372 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.772687 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.787622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.806536 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.837163 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.860044 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.868355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.868383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.868391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.868406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.868416 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.875827 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.893080 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f
213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.913622 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10f
dee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.934154 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.956184 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.971001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.971050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.971062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.971078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.971088 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:20Z","lastTransitionTime":"2026-01-22T06:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.979007 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:20 crc kubenswrapper[4720]: I0122 06:36:20.998865 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4d
a0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:20Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.031157 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.053676 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.068679 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.073895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.073945 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.073953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.073972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.073983 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.087399 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e30301
75237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.106095 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:21Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177142 
4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177104 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:55:47.590027704 +0000 UTC Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.177224 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.210618 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:21 crc kubenswrapper[4720]: E0122 06:36:21.210789 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.210622 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:21 crc kubenswrapper[4720]: E0122 06:36:21.211372 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.280085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.280128 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.280139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.280161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.280175 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.383222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.383286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.383302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.383326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.383340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.486531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.486591 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.486609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.486636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.486654 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.589259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.589306 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.589317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.589337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.589350 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.691978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.692025 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.692037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.692057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.692067 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.794152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.794191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.794200 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.794215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.794224 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.896957 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.897007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.897017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.897035 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.897047 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.999526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.999588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.999604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.999631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:21 crc kubenswrapper[4720]: I0122 06:36:21.999648 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:21Z","lastTransitionTime":"2026-01-22T06:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.103196 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.103261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.103286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.103319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.103342 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.177634 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:45:51.047644845 +0000 UTC Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.206034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.206081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.206099 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.206126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.206143 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.210589 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.210730 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.210582 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.211174 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.308031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.308096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.308118 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.308151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.308174 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.410625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.410695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.410716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.410743 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.410761 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.513644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.513707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.513731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.513831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.513859 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.616532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.616592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.616611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.616639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.616658 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.685213 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.685265 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.685281 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.685307 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.685326 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.707772 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.713880 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.713972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.713995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.714028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.714049 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.734421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.734466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.734479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.734502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.734518 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.753379 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.753428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.753441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.753461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.753474 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.769474 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.774158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.774207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.774221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.774237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.774248 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.790201 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:22Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:22 crc kubenswrapper[4720]: E0122 06:36:22.790380 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.791824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.791855 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.791867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.791893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.791922 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.896246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.896278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.896317 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.896336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:22 crc kubenswrapper[4720]: I0122 06:36:22.896348 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:22Z","lastTransitionTime":"2026-01-22T06:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.035696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.035771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.035796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.035833 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.035858 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.139340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.139407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.139425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.139461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.139483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.178838 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:52:28.525177195 +0000 UTC
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.210700 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.210809 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:23 crc kubenswrapper[4720]: E0122 06:36:23.210973 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:23 crc kubenswrapper[4720]: E0122 06:36:23.211108 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.243034 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.243112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.243141 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.243186 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.243217 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.346593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.346749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.346775 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.346805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.346828 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.448992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.449074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.449092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.449121 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.449139 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.552209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.552275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.552292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.552319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.552340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.654452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.654519 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.654546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.654580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.654599 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.757441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.758067 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.760145 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.760245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.760266 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.864634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.864767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.864795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.864827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.864850 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.968134 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.968195 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.968212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.968237 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:23 crc kubenswrapper[4720]: I0122 06:36:23.968256 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:23Z","lastTransitionTime":"2026-01-22T06:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.071468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.071530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.071547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.071574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.071590 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.174713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.174753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.174763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.174780 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.174840 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.180060 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:53:37.458170661 +0000 UTC
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.210110 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:24 crc kubenswrapper[4720]: E0122 06:36:24.210259 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.210293 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:24 crc kubenswrapper[4720]: E0122 06:36:24.210454 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.278010 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.278074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.278102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.278126 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.278147 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.381416 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.381485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.381503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.381531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.381565 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.484503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.484560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.484574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.484597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.484613 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.587372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.587471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.587495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.587518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.587568 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.690259 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.690322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.690343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.690366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.690384 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.793232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.793302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.793319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.793348 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.793367 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.897041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.897097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.897113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.897209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.897223 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:24Z","lastTransitionTime":"2026-01-22T06:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:24.999881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:24 crc kubenswrapper[4720]: I0122 06:36:25.000001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.000030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.000076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.000105 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.103193 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.103241 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.103252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.103269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.103282 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.181044 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:41:50.767849092 +0000 UTC
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.206440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.206511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.206531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.206607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.206632 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.209788 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.209788 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:25 crc kubenswrapper[4720]: E0122 06:36:25.210043 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:25 crc kubenswrapper[4720]: E0122 06:36:25.210162 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.310039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.310085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.310097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.310117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.310128 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.414350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.414510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.414548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.414590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.414615 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.517020 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.517086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.517116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.517156 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.517182 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.620006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.620071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.620089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.620117 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.620136 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.723344 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.723377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.723385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.723401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.723410 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.825839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.825895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.825942 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.825966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.825983 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.928663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.928715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.928731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.928753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:25 crc kubenswrapper[4720]: I0122 06:36:25.928769 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:25Z","lastTransitionTime":"2026-01-22T06:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.032173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.032281 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.032344 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.032382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.032404 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.136131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.136202 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.136220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.136249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.136267 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.181709 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:01:44.221891743 +0000 UTC Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.210533 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.210533 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:26 crc kubenswrapper[4720]: E0122 06:36:26.210829 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:26 crc kubenswrapper[4720]: E0122 06:36:26.210867 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.239087 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.239200 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.239221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.239248 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.239267 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.343335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.343407 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.343426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.343455 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.343480 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.448138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.448201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.448218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.448245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.448265 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.552033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.552101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.552119 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.552146 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.552164 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.655449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.655516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.655540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.655573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.655596 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.758558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.758615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.758632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.758659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.758684 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.861980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.862055 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.862072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.862101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.862121 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.965472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.965531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.965548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.965573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:26 crc kubenswrapper[4720]: I0122 06:36:26.965592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:26Z","lastTransitionTime":"2026-01-22T06:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.068164 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.068225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.068244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.068268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.068287 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.171520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.171586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.171606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.171634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.171654 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.182427 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:07:22.632053164 +0000 UTC Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.210319 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.210400 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:27 crc kubenswrapper[4720]: E0122 06:36:27.210541 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:27 crc kubenswrapper[4720]: E0122 06:36:27.210685 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.275719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.275784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.275803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.275828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.275849 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.379522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.379584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.379607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.379637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.379656 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.482750 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.482836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.482864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.482892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.482945 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.586874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.586984 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.587014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.587046 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.587068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.690342 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.690426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.690448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.690476 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.690497 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.793302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.793364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.793384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.793408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.793425 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.897053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.897147 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.897171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.897197 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:27 crc kubenswrapper[4720]: I0122 06:36:27.897214 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:27Z","lastTransitionTime":"2026-01-22T06:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.000867 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.000960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.000974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.000996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.001010 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.103710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.103821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.103849 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.103881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.103936 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.182984 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:13:57.580139632 +0000 UTC Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.207572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.207675 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.207700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.207732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.207757 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.210031 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.210111 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:28 crc kubenswrapper[4720]: E0122 06:36:28.210576 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:28 crc kubenswrapper[4720]: E0122 06:36:28.210272 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.246724 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.269684 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.292512 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.310538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.310655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.310691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.310733 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.310761 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.316096 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.335921 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.347857 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.357161 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.368696 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.379328 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.398522 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.409329 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.413547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.413580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.413592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.413610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.413620 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.421279 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc 
kubenswrapper[4720]: I0122 06:36:28.435198 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.446642 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.458720 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.474966 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.503430 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.516601 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.516631 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.516641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.516659 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.516670 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.521502 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:28Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:28 crc 
kubenswrapper[4720]: I0122 06:36:28.622078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.622113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.622129 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.622154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.622171 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.724506 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.724542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.724553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.724572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.724587 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.826829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.826881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.826896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.826938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.826956 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.930190 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.930266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.930289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.930321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:28 crc kubenswrapper[4720]: I0122 06:36:28.930345 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:28Z","lastTransitionTime":"2026-01-22T06:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.033443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.033716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.033813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.033967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.034086 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.137560 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.138096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.138345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.138554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.138755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.184041 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:12:27.154341988 +0000 UTC
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.210410 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.210506 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:29 crc kubenswrapper[4720]: E0122 06:36:29.210769 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:29 crc kubenswrapper[4720]: E0122 06:36:29.211026 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.242137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.242199 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.242222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.242252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.242274 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.345545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.345840 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.346037 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.346175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.346288 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.449446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.449522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.449540 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.449571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.449589 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.552340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.552401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.552418 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.552445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.552465 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.655986 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.656049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.656070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.656097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.656116 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.759085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.759127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.759135 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.759152 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.759163 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.862391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.862493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.862524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.862554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.862578 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.966472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.966530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.966546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.966571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:29 crc kubenswrapper[4720]: I0122 06:36:29.966592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:29Z","lastTransitionTime":"2026-01-22T06:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.069660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.069749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.069779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.069820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.069844 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.173137 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.173194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.173209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.173235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.173251 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.184842 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:05:20.7288432 +0000 UTC
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.210329 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.210365 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.210577 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.210713 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.277096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.277162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.277179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.277208 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.277227 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.381585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.382028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.382184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.382397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.382628 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.486290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.486359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.486383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.486406 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.486422 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.531995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.532219 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.532185458 +0000 UTC m=+146.674092173 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.532322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.532401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.532563 4720 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.532623 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.53261376 +0000 UTC m=+146.674520475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.532740 4720 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.532891 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.532876708 +0000 UTC m=+146.674783423 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.589124 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.589369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.589482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.589568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.589658 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.634518 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.634593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635009 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635047 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635068 4720 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635140 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.635116503 +0000 UTC m=+146.777023248 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635407 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635445 4720 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635465 4720 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 06:36:30 crc kubenswrapper[4720]: E0122 06:36:30.635525 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.635504784 +0000 UTC m=+146.777411529 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.693072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.693391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.693473 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.693566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.693658 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.796771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.796863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.796888 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.796956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.796983 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.899545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.899607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.899624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.899687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:30 crc kubenswrapper[4720]: I0122 06:36:30.899708 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:30Z","lastTransitionTime":"2026-01-22T06:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.002822 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.003175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.003338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.003492 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.003613 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.108080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.108571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.108765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.108988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.109308 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.185619 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:44:22.117733711 +0000 UTC Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.210165 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.210227 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:31 crc kubenswrapper[4720]: E0122 06:36:31.210392 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:31 crc kubenswrapper[4720]: E0122 06:36:31.210562 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.213021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.213244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.213414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.213557 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.213697 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.317293 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.317359 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.317371 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.317391 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.317406 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.419800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.420058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.420083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.420113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.420131 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.523972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.524047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.524069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.524102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.524123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.627545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.627609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.627623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.627643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.627659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.730651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.730719 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.730739 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.730768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.730790 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.834204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.834362 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.834378 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.834405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.834421 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.937018 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.937077 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.937090 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.937113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:31 crc kubenswrapper[4720]: I0122 06:36:31.937127 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:31Z","lastTransitionTime":"2026-01-22T06:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.040502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.040572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.040589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.040619 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.040636 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.143727 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.143785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.143798 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.143824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.143841 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.185756 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:54:14.900436019 +0000 UTC Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.210473 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:32 crc kubenswrapper[4720]: E0122 06:36:32.210650 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.210832 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:32 crc kubenswrapper[4720]: E0122 06:36:32.211191 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.246951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.247008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.247024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.247042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.247056 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.349686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.349762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.349781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.349812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.349833 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.453741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.453821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.453850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.453884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.453949 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.557854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.557974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.557998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.558031 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.558055 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.661793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.661858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.661877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.661904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.661957 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.765779 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.765863 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.765884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.765947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.765975 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.869572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.869638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.869656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.869689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.869710 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.893240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.893489 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.893641 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.893784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.893943 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: E0122 06:36:32.917556 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.923272 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.923509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.923954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.924144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.924302 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: E0122 06:36:32.945460 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.951350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.951393 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.951410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.951438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.951461 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:32 crc kubenswrapper[4720]: E0122 06:36:32.974263 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:32Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.981198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.981424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.981575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.981781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:32 crc kubenswrapper[4720]: I0122 06:36:32.982039 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:32Z","lastTransitionTime":"2026-01-22T06:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: E0122 06:36:33.004274 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.016289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.016333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.016352 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.016378 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.016397 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: E0122 06:36:33.039657 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:33Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:33 crc kubenswrapper[4720]: E0122 06:36:33.039789 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.041884 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.041973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.042268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.042308 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.042327 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.146197 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.146258 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.146463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.146491 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.146510 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.186146 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:41:59.544841106 +0000 UTC Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.210636 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:33 crc kubenswrapper[4720]: E0122 06:36:33.210868 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.210944 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:33 crc kubenswrapper[4720]: E0122 06:36:33.212163 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.219348 4720 scope.go:117] "RemoveContainer" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.250785 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.250871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.250897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.250958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.250980 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.355007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.355441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.355459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.355485 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.355503 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.459516 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.459586 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.459607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.459636 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.459659 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.563706 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.563806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.563828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.563893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.563990 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.671302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.671376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.671396 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.671424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.671443 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.774713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.774773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.774787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.774815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.774829 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.878459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.878530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.878547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.878582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.878608 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.982565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.982639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.982658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.982688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:33 crc kubenswrapper[4720]: I0122 06:36:33.982714 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:33Z","lastTransitionTime":"2026-01-22T06:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.085413 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.085482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.085501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.085531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.085556 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.186236 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:27:13.299391332 +0000 UTC Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.188301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.188357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.188380 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.188411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.188434 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.209944 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.210079 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:34 crc kubenswrapper[4720]: E0122 06:36:34.210128 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:34 crc kubenswrapper[4720]: E0122 06:36:34.210306 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.292198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.292252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.292262 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.292283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.292295 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.395882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.395970 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.395988 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.396014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.396033 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.499656 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.499722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.499736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.499761 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.499777 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.603046 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.603108 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.603125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.603159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.603178 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.705753 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.705813 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.705828 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.705856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.705873 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.804485 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/2.log" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.807736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.807778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.807788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.807806 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.807819 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.810167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.810868 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.825829 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.839356 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.856344 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.879477 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.903372 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.910827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.910886 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.910906 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:34 crc 
kubenswrapper[4720]: I0122 06:36:34.910960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.910983 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:34Z","lastTransitionTime":"2026-01-22T06:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.936355 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 
obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.956730 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.969340 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:34 crc kubenswrapper[4720]: I0122 06:36:34.982367 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.001488 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:34Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.014534 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.014590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.014605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.014628 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.014643 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.023726 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.044879 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.064838 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.081796 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc 
kubenswrapper[4720]: I0122 06:36:35.115311 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.117335 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.117459 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.117483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.117510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.117531 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.140762 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.161342 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.183023 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.187147 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:43:42.686395732 +0000 UTC Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.209713 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.209734 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:35 crc kubenswrapper[4720]: E0122 06:36:35.209944 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:35 crc kubenswrapper[4720]: E0122 06:36:35.210076 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.221249 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.221300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.221323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.221350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.221372 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.324224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.324722 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.324854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.325081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.325220 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.428323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.428387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.428409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.428437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.428459 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.531320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.531405 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.531425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.531464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.531487 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.635173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.635250 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.635268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.635297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.635319 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.738221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.738689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.738851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.739069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.739215 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.818999 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.820373 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/2.log" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.826407 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" exitCode=1 Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.826482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.826551 4720 scope.go:117] "RemoveContainer" containerID="3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.828374 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:36:35 crc kubenswrapper[4720]: E0122 06:36:35.828991 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.842539 4720 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.842933 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.842953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.842979 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.842998 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.859038 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.881007 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.946148 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.946228 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.946246 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:35 crc 
kubenswrapper[4720]: I0122 06:36:35.946270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.946288 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:35Z","lastTransitionTime":"2026-01-22T06:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.950327 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ae75d823e484413b1163d5407d5560d4cae3d122fb5f1ce34f501c3d2fce1b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:04Z\\\",\\\"message\\\":\\\"min network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node 
crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:04Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:04.142760 6361 ovn.go:134] Ensuring zone local for Pod openshift-dns/node-resolver-dtnxt in node crc\\\\nI0122 06:36:04.142770 6361 obj_retry.go:386] Retry successful for *v1.Pod openshift-dns/node-resolver-dtnxt after 0 failed attempt(s)\\\\nI0122 06:36:04.142777 6361 default_network_controller.go:776] Recording success event on pod openshift-dns/node-resolver-dtnxt\\\\nI0122 06:36:04.142318 6361 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0122 06:36:04.142789 6361 ovn.go:134] Ensuring zone local for Pod openshift-kube-scheduler/openshift-kube-scheduler-crc in node crc\\\\nI0122 06:36:04.142793 6361 obj_ret\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:35Z\\\",\\\"message\\\":\\\"lt network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:35.081612 6797 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.977925 4720 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb
9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:35 crc kubenswrapper[4720]: I0122 06:36:35.991151 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.005415 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe7932
5cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.028181 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.043979 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.048900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.048965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.048983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 
06:36:36.049008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.049026 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.063678 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.086115 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.105349 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc 
kubenswrapper[4720]: I0122 06:36:36.139195 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.152092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.152160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.152179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.152206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.152225 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.162754 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.181801 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.188248 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:48:08.484668622 +0000 UTC Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.202369 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b
1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":
\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.210344 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.210385 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:36 crc kubenswrapper[4720]: E0122 06:36:36.210667 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:36 crc kubenswrapper[4720]: E0122 06:36:36.210990 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.224322 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.228457 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.247749 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.255561 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.255611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.255631 4720 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.255658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.255678 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.267821 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c0
55c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.359033 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.359112 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.359130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 
06:36:36.359153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.359196 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.463116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.463162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.463174 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.463194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.463209 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.565801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.565851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.565868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.565892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.565956 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.669283 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.669330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.669343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.669364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.669379 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.773638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.773729 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.773746 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.773773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.773797 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.835378 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.842833 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:36:36 crc kubenswrapper[4720]: E0122 06:36:36.843499 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.868492 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.877296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.877366 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.877387 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.877424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.877445 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.896779 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.931426 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.956352 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.978554 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.981024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.981098 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.981120 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.981151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:36 crc kubenswrapper[4720]: I0122 06:36:36.981172 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:36Z","lastTransitionTime":"2026-01-22T06:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.001716 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e30301
75237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:36Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.027458 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.046690 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.064960 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a98bb6d-6ab7-413e-ab94-7bb5d73babe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a954389852d9be1f01cad5b53c0ee3a1e22d956897c2fec4bbeffdf558ec585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.085004 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.085076 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.085088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.085111 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.085131 4720 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.088714 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.108990 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.127104 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.159015 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:35Z\\\",\\\"message\\\":\\\"lt network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:35.081612 6797 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.185351 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c
02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.189534 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:29:08.675146086 +0000 UTC Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.193850 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.193958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.193983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.194017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.194049 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.206735 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.210160 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.210237 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:37 crc kubenswrapper[4720]: E0122 06:36:37.210416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:37 crc kubenswrapper[4720]: E0122 06:36:37.210634 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.225386 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.247237 4720 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.270735 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.291033 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:37Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:37 crc 
kubenswrapper[4720]: I0122 06:36:37.297582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.297635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.297654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.297686 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.297705 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.400987 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.401057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.401079 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.401113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.401137 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.504903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.505029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.505049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.505075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.505095 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.608635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.608695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.608714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.608741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.608758 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.712424 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.712472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.712490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.712522 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.712543 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.816207 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.816270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.816296 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.816345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.816371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.920282 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.920345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.920364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.920398 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:37 crc kubenswrapper[4720]: I0122 06:36:37.920418 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:37Z","lastTransitionTime":"2026-01-22T06:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.023566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.023647 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.023674 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.023710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.023806 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.128012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.128083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.128100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.128130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.128150 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.190399 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 00:35:07.496486391 +0000 UTC Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.210000 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.210187 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:38 crc kubenswrapper[4720]: E0122 06:36:38.211382 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:38 crc kubenswrapper[4720]: E0122 06:36:38.211535 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.232220 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.232301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.232322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.232350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.232372 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.236968 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.264031 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.285422 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.313757 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:35Z\\\",\\\"message\\\":\\\"lt network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:35.081612 6797 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.335771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.335835 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.335853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.335882 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.335902 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.341806 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.360906 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.378254 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.405690 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:3
5:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7
b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.419175 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc 
kubenswrapper[4720]: I0122 06:36:38.434211 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba520093
79f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 
06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.440521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.440655 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.440795 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.440903 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.441026 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.446152 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.457734 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.477067 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.1
1\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.490054 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.498699 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.509774 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.523405 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.533268 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.541991 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a98bb6d-6ab7-413e-ab94-7bb5d73babe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a954389852d9be1f01cad5b53c0ee3a1e22d956897c2fec4bbeffdf558ec585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:38Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.543224 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.543261 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.543271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.543288 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.543299 4720 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.651267 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.651321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.651333 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.651355 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.651371 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.754701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.754763 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.754787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.754821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.754846 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.857012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.857043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.857054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.857069 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.857085 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.959935 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.959972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.959981 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.959996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:38 crc kubenswrapper[4720]: I0122 06:36:38.960006 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:38Z","lastTransitionTime":"2026-01-22T06:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.063007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.063056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.063068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.063106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.063124 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.166433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.166498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.166520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.166549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.166567 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.191121 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:47:50.132810078 +0000 UTC Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.210460 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.210494 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:39 crc kubenswrapper[4720]: E0122 06:36:39.210605 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:39 crc kubenswrapper[4720]: E0122 06:36:39.210783 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.270160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.270263 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.270316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.270343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.270391 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.373330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.373379 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.373397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.373425 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.373444 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.476509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.476581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.476600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.476627 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.476645 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.579796 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.579857 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.579875 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.579926 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.579940 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.683639 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.683707 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.683718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.683740 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.683755 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.786477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.786536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.786553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.786579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.786601 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.889131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.889204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.889221 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.889247 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.889266 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.992610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.992685 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.992702 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.992730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:39 crc kubenswrapper[4720]: I0122 06:36:39.992748 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:39Z","lastTransitionTime":"2026-01-22T06:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.096212 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.096292 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.096319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.096350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.096372 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.191467 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:21:16.971476834 +0000 UTC
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.200256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.200338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.200358 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.200390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.200414 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.209900 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.209990 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:40 crc kubenswrapper[4720]: E0122 06:36:40.210245 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:40 crc kubenswrapper[4720]: E0122 06:36:40.210390 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.304458 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.304541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.304568 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.304609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.304636 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.408319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.408389 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.408410 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.408438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.408457 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.512040 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.512100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.512113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.512138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.512153 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.615896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.616008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.616027 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.616061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.616084 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.719864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.719953 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.719966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.719998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.720015 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.824461 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.824530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.824550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.824578 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.824597 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.934397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.934512 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.934535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.934564 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:40 crc kubenswrapper[4720]: I0122 06:36:40.934584 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:40Z","lastTransitionTime":"2026-01-22T06:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.038127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.038211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.038230 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.038268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.038289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.141253 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.141326 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.141345 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.141401 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.141423 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.192596 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:56:21.874546353 +0000 UTC
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.210181 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.210322 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:41 crc kubenswrapper[4720]: E0122 06:36:41.210721 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:41 crc kubenswrapper[4720]: E0122 06:36:41.210787 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.244966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.245050 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.245071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.245105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.245126 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.349422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.349501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.349521 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.349550 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.349572 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.452310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.452369 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.452383 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.452403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.452417 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.555285 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.555328 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.555338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.555353 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.555363 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.658350 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.658414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.658432 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.658462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.658489 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.762075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.762139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.762158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.762185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.762205 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.865701 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.865762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.865773 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.865792 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.865808 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.968845 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.968936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.968950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.968977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:41 crc kubenswrapper[4720]: I0122 06:36:41.968994 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:41Z","lastTransitionTime":"2026-01-22T06:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.072227 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.072284 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.072297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.072318 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.072332 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.174759 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.174846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.174874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.174940 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.174971 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.193676 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:07:43.535942873 +0000 UTC
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.210183 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.210189 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:42 crc kubenswrapper[4720]: E0122 06:36:42.210328 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:42 crc kubenswrapper[4720]: E0122 06:36:42.210473 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.278588 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.278643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.278658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.278683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.278698 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.382053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.382181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.382201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.382231 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.382251 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.485950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.486007 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.486026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.486056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.486075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.589472 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.589520 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.589533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.589554 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.589567 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.693851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.693920 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.693931 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.693950 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.693961 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.797430 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.797538 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.797558 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.797581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.797595 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.901452 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.901530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.901549 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.901579 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:42 crc kubenswrapper[4720]: I0122 06:36:42.901600 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:42Z","lastTransitionTime":"2026-01-22T06:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.005885 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.006017 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.006041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.006066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.006086 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.109791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.109869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.109896 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.109954 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.109976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.142805 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.142876 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.142900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.142965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.142998 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.159323 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.164883 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.164989 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.165008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.165039 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.165058 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.184419 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.196363 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.196436 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.196457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.196486 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.196515 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.197299 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:12:09.706111447 +0000 UTC Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.209734 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.209744 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.210162 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.210028 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.217689 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.223946 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.223992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.224005 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.224029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.224045 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.241363 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.246758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.246826 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.246846 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.246883 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.246907 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.269790 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:43Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:43 crc kubenswrapper[4720]: E0122 06:36:43.270045 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.272563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.272623 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.272643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.272672 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.272691 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.376397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.376478 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.376503 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.376539 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.376577 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.480499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.480583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.480603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.480632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.480652 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.584226 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.584299 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.584321 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.584356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.584400 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.712329 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.712408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.712433 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.712467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.712486 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.816256 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.816394 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.816414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.816442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.816461 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.919991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.920083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.920113 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.920151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:43 crc kubenswrapper[4720]: I0122 06:36:43.920175 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:43Z","lastTransitionTime":"2026-01-22T06:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.024529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.024603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.024622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.024650 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.024668 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.128323 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.128382 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.128400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.128428 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.128447 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.198527 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:54:43.280544115 +0000 UTC Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.210027 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.210094 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:44 crc kubenswrapper[4720]: E0122 06:36:44.210209 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:44 crc kubenswrapper[4720]: E0122 06:36:44.210275 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.232660 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.232720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.232736 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.232758 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.232775 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.340400 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.340451 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.340465 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.340482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.340492 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.444184 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.444239 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.444254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.444275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.444289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.546500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.546606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.546622 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.546643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.546657 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.650964 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.651013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.651024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.651045 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.651058 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.756735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.756831 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.756864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.756897 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.756938 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.860881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.860998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.861021 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.861054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.861075 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.965482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.965556 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.965609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.965646 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:44 crc kubenswrapper[4720]: I0122 06:36:44.965667 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:44Z","lastTransitionTime":"2026-01-22T06:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.068589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.068648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.068667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.068694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.068712 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.172160 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.172225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.172244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.172270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.172289 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.199193 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:39:55.931138349 +0000 UTC Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.210606 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.210697 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:45 crc kubenswrapper[4720]: E0122 06:36:45.210789 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:45 crc kubenswrapper[4720]: E0122 06:36:45.211013 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.276150 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.276223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.276244 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.276275 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.276301 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.379125 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.379165 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.379176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.379194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.379206 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.482384 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.482445 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.482464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.482493 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.482512 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.585881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.585959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.585972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.585991 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.586002 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.688450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.688497 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.688508 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.688526 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.688538 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.795818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.796059 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.796093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.796132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.796170 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.900009 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.900072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.900089 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.900115 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:45 crc kubenswrapper[4720]: I0122 06:36:45.900135 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:45Z","lastTransitionTime":"2026-01-22T06:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.004531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.004614 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.004637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.004669 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.004692 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.108529 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.108573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.108583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.108600 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.108613 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.200009 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:57:07.510711835 +0000 UTC Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.209788 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.209869 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:46 crc kubenswrapper[4720]: E0122 06:36:46.210048 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:46 crc kubenswrapper[4720]: E0122 06:36:46.210380 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.211260 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.211305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.211325 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.211349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.211365 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.314731 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.314808 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.314829 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.314854 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.314875 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.417376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.417457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.417475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.417505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.417524 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.521127 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.521193 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.521211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.521240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.521262 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.625175 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.625245 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.625269 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.625297 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.625315 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.728367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.728431 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.728448 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.728475 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.728493 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.834420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.834470 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.834487 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.834517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.834535 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.937894 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.938012 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.938028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.938054 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:46 crc kubenswrapper[4720]: I0122 06:36:46.938072 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:46Z","lastTransitionTime":"2026-01-22T06:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.041495 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.041563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.041583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.041611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.041629 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.145735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.145803 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.145825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.145856 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.145877 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.201483 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 13:53:07.778814815 +0000 UTC Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.209905 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.209905 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:47 crc kubenswrapper[4720]: E0122 06:36:47.210189 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:47 crc kubenswrapper[4720]: E0122 06:36:47.210263 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.249559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.249630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.249653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.249682 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.249703 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.352864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.352937 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.352951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.352972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.352987 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.456864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.456978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.457002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.457038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.457060 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.560783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.560864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.560892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.560992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.561021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.664786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.664869 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.664893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.664972 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.664999 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.768592 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.768693 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.768718 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.769223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.769417 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.872713 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.872781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.872799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.872827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.872846 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.976356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.976422 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.976439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.976467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:47 crc kubenswrapper[4720]: I0122 06:36:47.976483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:47Z","lastTransitionTime":"2026-01-22T06:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.079513 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.079589 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.079615 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.079649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.079678 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.182827 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.182895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.182960 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.182993 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.183019 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.202221 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:37:25.154576724 +0000 UTC Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.209768 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:48 crc kubenswrapper[4720]: E0122 06:36:48.210022 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.210185 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:48 crc kubenswrapper[4720]: E0122 06:36:48.210435 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.236880 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a98bb6d-6ab7-413e-ab94-7bb5d73babe8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a954389852d9be1f01cad5b53c0ee3a1e22d956897c2fec4bbeffdf558ec585\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02c044f6a9997893116f043639407239a8dc4cf8a30435557910df3c594389cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.258191 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc18b75d6761db0d4a459a096804ac6600c6c2026ca133d47671a8aadba175f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6616553b8c62205590801d10e6e316b396b7190584b08a774ebb87ab425cc122\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.279406 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4b26e9d-6a95-4b1c-9750-88b6aa100c67\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57c69f24cbbb5aea3a63b602916230df0cc3d74344a32bd1528bd162786a6d4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88eb6692702bcb8523c759d764bb8dede5af5a28
90217a1c6897a5b18a7197dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9f52f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bnsvd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.286825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.286902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.286959 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc 
kubenswrapper[4720]: I0122 06:36:48.287022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.287045 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.307025 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lxzml" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c7b3c34a-9870-4c9f-990b-29b7e768d5a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec19859a546dc85e4807d0f522014187b1d22c1167a342776aafbcfd7c3a2cd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e395a61e07527eace7f980f476ef9b16a0c6d525b461837ebd9abcbebfa9b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa836ad8fb1d77bc1b9d30575db8ed7f2394a2b03d41f0b26a3907fa07ab154\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09cda77dbfa7cafc1f167478649fe1bc961d7610005bd3260407796956dd8fc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d74c02c82589070c20f88c3ce9574341f2c4898af034fe9a686aa7b7c19ed93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42e7374bf2c5e55d0afbe1e7564a2bc7fd23d0af7e22f0cab853fb9d75802422\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bbafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b
bafab11f2868d0a97a54debc01d8e4cc6360b08a5b7e883a2f385db2e9ba0c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8h66q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lxzml\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.325390 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-5bmrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"819a554b-cde8-41eb-bf3c-b965b5754ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fd548b8bf40d7be75f3433d2396eac3eb467cdc77a6e92f29c5321b0f2e11c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w949z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:35Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-5bmrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.346050 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83dddec7-9ecb-4d3b-97ac-e2f8f59e547c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ea91d23f213bf1c41ebffa43c30559153a3fdae5aac42557a24566cc90bd2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8570742a6d232a3695d42f42f4a8a7bfe79325cb20c8da4129148dda33df4683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9b5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:45Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4c84t\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.367654 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63aad383-f2cb-4bee-a51e-8f27e7b6f6dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f91f6cde32c3019a63315238551af8420def8e585e4dbf3d29e55f62f155ba3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:3
5:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5dc3d31620607809111c9a1ed06ab7dfb1b24ff5ef18155979efc6b09b6d67d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://365ce842df37c87fae429e32228087a45828c6c717b63a2eb9818f26bf9aed7
b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.387831 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.392022 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.392284 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.392311 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.392343 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.392367 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.409735 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.430503 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3042f2c25662a3c0a0f010b2e431521ea9dc24b59303773b1fc4e2b43308937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.465561 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a725fa6-120e-41b1-bf7b-e1419e35c891\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:35Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:35Z\\\",\\\"message\\\":\\\"lt network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default 
node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:35Z is after 2025-08-24T17:21:41Z]\\\\nI0122 06:36:35.081612 6797 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:36:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a478b385adad231c65
bafa44aef49fd983ef1a81babc0464478e199578b612b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmnn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pc2f4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.484713 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kvtch" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"409f50e8-9b68-4efe-8eb4-bc144d383817\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhm9b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kvtch\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.495637 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.495728 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.495754 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.495787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.495807 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.519731 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2336077a-0d11-443d-aa5f-3bc0f75aeb59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7f8b443098a61889ca09134ff7b6e219efca59abcbcea3bb4c1aa9266ee8e55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfdba4f4bb3ed55acad58ed0c07d2dd58b1a1e9fd56443090c002bdefd3e7e10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2caddf6dd66b641ff10add3b52e37581c741507660ec4fe67a3b9ddde699e56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://108525dd49ce1847f1cc6c4e2a4c70cae3d2d990cda97eeb4aba8879b955432d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d49a501c34b86f0f9300742970a00c92ce3b8d352518d8f53d1ab8c734e2af6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49fa65139740d25a82a8becea2b7476e9846a3eebfe5a02bb0b3e8cc4e7fa30d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e171d6a13b6108f7df37e46d2195682d2ce7a75e5fbfd0ccf4aca27871964cf6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://136504ce05b178f4c60a839a1044c0b0892ea97babfe80181eac601c5057df16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-22T06:35:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.544659 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71c3232e-a7c6-4127-b9ae-54b793cf40fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-22T06:35:26Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0122 06:35:20.895806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0122 06:35:20.896560 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2161447925/tls.crt::/tmp/serving-cert-2161447925/tls.key\\\\\\\"\\\\nI0122 06:35:26.673138 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0122 06:35:26.677382 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0122 06:35:26.677419 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0122 06:35:26.677459 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0122 06:35:26.677469 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0122 06:35:26.688473 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0122 06:35:26.688552 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0122 06:35:26.688580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0122 06:35:26.688588 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0122 06:35:26.688596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0122 06:35:26.688605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0122 06:35:26.688875 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0122 06:35:26.690763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:10Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3418e6eacd36147ae6ae1be72f23113de
a7dbaab3597ba0fee76fbc846c0162f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.566188 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42d0f971bb1f2a686c44e255b97648123170034b41d626318aab29e111dda4a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.588068 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-n5w5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"85373343-156d-4de0-a72b-baaf7c4e3d08\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-22T06:36:19Z\\\",\\\"message\\\":\\\"2026-01-22T06:35:34+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b\\\\n2026-01-22T06:35:34+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_310a9cdb-d8e6-49bd-b096-1bf7adc9b43b to /host/opt/cni/bin/\\\\n2026-01-22T06:35:34Z [verbose] multus-daemon started\\\\n2026-01-22T06:35:34Z [verbose] Readiness Indicator file check\\\\n2026-01-22T06:36:19Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tlzmz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-multus\"/\"multus-n5w5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.599463 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.599536 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.599555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.599587 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.599609 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.608252 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73ba4f9c-33cb-4898-b2a3-21bf3327cf5b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b5ab589e0e928e47ac498164439f2fbd62bfe1130a9c17a9d96ec4cedd2c1e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5ff36eb3ab53efb54f45ab3e3030175237fd76ecd28ffcdc5a5079dfb93ec2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5d1e4cb487f75b95bc0da8ec3adbb6410d171fa2c95137c8127cea6023166f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7605a8cc85e1a2da51c6bedd5f03df930e1135340ec7de69d1ef643c907fd2bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-22T06:35:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-22T06:35:09Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.629239 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:25Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.645304 4720 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dtnxt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"518eedd0-2cb6-458d-a7a8-d8c8b8296401\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-22T06:35:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://794bf7a98b2300bd21bd914f5ad90c8f92ea7c055c78f62dbac7bc66c1c4d282\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-22T06:35:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqx2p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-22T06:35:32Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dtnxt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:48Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.703364 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.703541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.703569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.703653 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.703727 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.807084 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.807158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.807178 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.807234 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.807254 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.910086 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.910204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.910232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.910270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:48 crc kubenswrapper[4720]: I0122 06:36:48.910293 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:48Z","lastTransitionTime":"2026-01-22T06:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.013559 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.013625 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.013644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.013667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.013683 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.116386 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.116438 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.116449 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.116471 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.116483 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.202838 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:43:51.984948251 +0000 UTC Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.210207 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:49 crc kubenswrapper[4720]: E0122 06:36:49.210361 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.211045 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:49 crc kubenswrapper[4720]: E0122 06:36:49.211127 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.221194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.221252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.221271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.221298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.221316 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.324024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.324088 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.324105 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.324132 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.324150 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.434072 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.434149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.434174 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.434203 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.434222 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.537573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.537643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.537667 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.537698 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.537720 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.641102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.641144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.641162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.641206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.641226 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.743737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.743768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.743776 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.743790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.743798 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.846480 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.846533 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.846547 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.846571 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.846584 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.949110 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.949163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.949180 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.949204 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:49 crc kubenswrapper[4720]: I0122 06:36:49.949220 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:49Z","lastTransitionTime":"2026-01-22T06:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.052509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.052569 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.052585 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.052612 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.052630 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.156006 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.156078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.156101 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.156133 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.156155 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.203762 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 10:57:37.147117955 +0000 UTC
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.210270 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.210398 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:50 crc kubenswrapper[4720]: E0122 06:36:50.210467 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:50 crc kubenswrapper[4720]: E0122 06:36:50.211038 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.211548 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"
Jan 22 06:36:50 crc kubenswrapper[4720]: E0122 06:36:50.211864 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.258852 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.258900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.258938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.258961 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.258976 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.362130 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.362201 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.362222 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.362254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.362274 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.466632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.466714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.466734 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.466765 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.466787 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.570741 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.570824 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.570847 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.570878 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.570899 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.674816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.674879 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.674898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.674951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.674971 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.778804 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.778873 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.778892 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.778947 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.778967 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.881749 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.881816 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.881836 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.881868 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.881888 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.985748 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.985820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.985842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.985874 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:50 crc kubenswrapper[4720]: I0122 06:36:50.985895 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:50Z","lastTransitionTime":"2026-01-22T06:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.089198 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.089280 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.089305 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.089340 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.089365 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.193319 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.193385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.193408 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.193437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.193457 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.204019 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:04:41.547181314 +0000 UTC
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.210384 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.210384 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:51 crc kubenswrapper[4720]: E0122 06:36:51.210555 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:51 crc kubenswrapper[4720]: E0122 06:36:51.210772 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.296537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.296616 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.296632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.296662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.296681 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400122 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400055 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:51 crc kubenswrapper[4720]: E0122 06:36:51.400195 4720 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: E0122 06:36:51.400315 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs podName:409f50e8-9b68-4efe-8eb4-bc144d383817 nodeName:}" failed. No retries permitted until 2026-01-22 06:37:55.400265514 +0000 UTC m=+167.542172229 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs") pod "network-metrics-daemon-kvtch" (UID: "409f50e8-9b68-4efe-8eb4-bc144d383817") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.400419 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.503439 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.503505 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.503524 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.503551 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.503572 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.606336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.606403 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.606423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.606450 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.606472 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.709434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.709500 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.709518 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.709545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.709567 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.811652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.811717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.811737 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.811768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.811793 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.914068 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.914140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.914162 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.914188 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:51 crc kubenswrapper[4720]: I0122 06:36:51.914208 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:51Z","lastTransitionTime":"2026-01-22T06:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.018411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.018483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.018501 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.018531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.018554 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.121877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.121997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.122024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.122062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.122092 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.204955 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:23:04.1992502 +0000 UTC
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.210479 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.210504 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:52 crc kubenswrapper[4720]: E0122 06:36:52.210718 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:52 crc kubenswrapper[4720]: E0122 06:36:52.210882 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.225572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.225638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.225658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.225691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.225712 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.329644 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.329726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.329747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.329778 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.329799 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.433490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.433573 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.433599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.433634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.433663 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.537649 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.537767 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.537790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.537821 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.537863 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.641563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.641633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.641651 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.641683 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.641703 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.744286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.744372 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.744399 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.744434 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.744463 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.847768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.847848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.847866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.847900 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.847966 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.951302 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.951420 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.951447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.951481 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:52 crc kubenswrapper[4720]: I0122 06:36:52.951505 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:52Z","lastTransitionTime":"2026-01-22T06:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.055582 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.055652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.055671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.055700 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.055722 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.159687 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.159768 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.159787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.159814 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.159840 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.205455 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:34:44.863420727 +0000 UTC Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.209897 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.210314 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.209957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.211410 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.263575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.263676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.263708 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.263747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.263774 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.367975 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.368048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.368070 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.368100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.368123 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.471809 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.472535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.472717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.472965 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.473194 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.577488 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.578023 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.578298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.578483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.578673 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.651236 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.651301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.651320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.651349 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.651366 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.674806 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:53Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.682373 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.682447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.682466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.682499 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.682519 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.704859 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:53Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.711511 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.711606 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.711630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.711664 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.711683 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.733732 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:53Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.740062 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.740144 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.740169 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.740206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.740231 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.763997 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:53Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.769154 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.769219 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.769238 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.769377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.769408 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.790268 4720 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-22T06:36:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"234f6209-cc86-46cc-ab69-026482c920c9\\\",\\\"systemUUID\\\":\\\"4713dd6d-99ec-4bb6-94e4-e7199d2e8be9\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-22T06:36:53Z is after 2025-08-24T17:21:41Z" Jan 22 06:36:53 crc kubenswrapper[4720]: E0122 06:36:53.790535 4720 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.793457 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.793530 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.793555 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.793590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.793613 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.896893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.896990 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.897008 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.897042 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.897068 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.999715 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.999781 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.999799 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.999838 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:53 crc kubenswrapper[4720]: I0122 06:36:53.999858 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:53Z","lastTransitionTime":"2026-01-22T06:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.108887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.109000 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.109026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.109061 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.109092 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.206586 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:20:31.409452807 +0000 UTC Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.210176 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.210190 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:54 crc kubenswrapper[4720]: E0122 06:36:54.210483 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:54 crc kubenswrapper[4720]: E0122 06:36:54.210707 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.212783 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.212843 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.212861 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.212894 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.212944 4720 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.316716 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.316791 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.316812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.316841 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.316860 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.420215 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.420279 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.420300 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.420327 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.420349 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.524787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.525274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.525507 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.526232 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.526301 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.630097 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.630163 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.630181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.630209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.630227 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.734080 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.734441 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.734546 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.734658 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.734765 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.837977 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.838468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.838810 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.839161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.839489 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.942825 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.942951 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.942971 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.942999 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:54 crc kubenswrapper[4720]: I0122 06:36:54.943021 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:54Z","lastTransitionTime":"2026-01-22T06:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.046324 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.046390 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.046409 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.046435 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.046455 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.149361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.149411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.149423 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.149442 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.149457 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.207403 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:41:57.235127336 +0000 UTC
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.209855 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.210122 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:55 crc kubenswrapper[4720]: E0122 06:36:55.210458 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:55 crc kubenswrapper[4720]: E0122 06:36:55.210787 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.252797 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.252842 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.252853 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.252871 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.252885 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.356502 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.356565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.356584 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.356613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.356633 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.461136 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.461679 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.461699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.461730 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.461751 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.565509 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.565593 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.565611 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.565638 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.565657 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.668998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.669085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.669114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.669149 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.669171 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.772680 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.772762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.772782 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.772812 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.772833 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.875594 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.875673 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.875699 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.875735 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.875766 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.979545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.979597 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.979609 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.979630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:55 crc kubenswrapper[4720]: I0122 06:36:55.979642 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:55Z","lastTransitionTime":"2026-01-22T06:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.083621 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.083690 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.083710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.083742 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.083761 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.186872 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.186973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.186997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.187026 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.187043 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.208538 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:20:35.154470305 +0000 UTC
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.209897 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.210290 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:36:56 crc kubenswrapper[4720]: E0122 06:36:56.210425 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:36:56 crc kubenswrapper[4720]: E0122 06:36:56.210500 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.290696 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.290771 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.290790 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.290820 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.290840 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.394015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.394085 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.394103 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.394131 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.394150 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.498075 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.498153 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.498179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.498209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.498231 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.600996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.601081 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.601106 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.601139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.601158 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.709862 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.710482 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.710695 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.711365 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.711514 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.814490 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.814553 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.814613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.814643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.814695 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.918278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.918338 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.918367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.918402 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:56 crc kubenswrapper[4720]: I0122 06:36:56.918431 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:56Z","lastTransitionTime":"2026-01-22T06:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.022542 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.022618 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.022635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.022694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.022714 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.126866 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.126973 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.126992 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.127019 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.127038 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.209607 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:55:00.579746856 +0000 UTC
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.209786 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.209835 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:36:57 crc kubenswrapper[4720]: E0122 06:36:57.210021 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:36:57 crc kubenswrapper[4720]: E0122 06:36:57.210249 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.231298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.231440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.231477 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.231565 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.231682 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.334839 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.334978 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.335185 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.335235 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.335271 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.439877 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.439983 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.440015 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.440041 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.440061 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.544161 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.544233 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.544252 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.544278 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.544299 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.647818 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.647889 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.647966 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.648001 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.648022 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.751788 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.751864 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.751881 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.751938 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.751963 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.855985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.856052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.856071 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.856102 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.856129 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.960648 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.960745 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.960769 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.960800 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:57 crc kubenswrapper[4720]: I0122 06:36:57.960819 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:57Z","lastTransitionTime":"2026-01-22T06:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.064949 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.065032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.065053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.065083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.065104 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.169415 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.169479 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.169574 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.169603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.169633 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.210044 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 23:57:27.472261413 +0000 UTC Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.210236 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:36:58 crc kubenswrapper[4720]: E0122 06:36:58.210400 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.210999 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:36:58 crc kubenswrapper[4720]: E0122 06:36:58.211525 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.248988 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=56.248949892 podStartE2EDuration="56.248949892s" podCreationTimestamp="2026-01-22 06:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.248194061 +0000 UTC m=+110.390100836" watchObservedRunningTime="2026-01-22 06:36:58.248949892 +0000 UTC m=+110.390856637" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.275787 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.275895 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.275956 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 
06:36:58.275994 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.276020 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.297582 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dtnxt" podStartSLOduration=86.297549623 podStartE2EDuration="1m26.297549623s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.29743409 +0000 UTC m=+110.439340855" watchObservedRunningTime="2026-01-22 06:36:58.297549623 +0000 UTC m=+110.439456368" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.324565 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=22.324527 podStartE2EDuration="22.324527s" podCreationTimestamp="2026-01-22 06:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.32419056 +0000 UTC m=+110.466097345" watchObservedRunningTime="2026-01-22 06:36:58.324527 +0000 UTC m=+110.466433735" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.379134 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 
06:36:58.379191 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.379209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.379276 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.379295 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.416016 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podStartSLOduration=86.415982368 podStartE2EDuration="1m26.415982368s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.374238462 +0000 UTC m=+110.516145217" watchObservedRunningTime="2026-01-22 06:36:58.415982368 +0000 UTC m=+110.557889103" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.455347 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lxzml" podStartSLOduration=86.455314026 podStartE2EDuration="1m26.455314026s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-22 06:36:58.453851224 +0000 UTC m=+110.595757969" watchObservedRunningTime="2026-01-22 06:36:58.455314026 +0000 UTC m=+110.597220891" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.477620 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-5bmrh" podStartSLOduration=86.477583649 podStartE2EDuration="1m26.477583649s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.476765565 +0000 UTC m=+110.618672310" watchObservedRunningTime="2026-01-22 06:36:58.477583649 +0000 UTC m=+110.619490384" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.483181 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.483483 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.483635 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.483784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.483943 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.539692 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4c84t" podStartSLOduration=85.539655932 podStartE2EDuration="1m25.539655932s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.502220579 +0000 UTC m=+110.644127314" watchObservedRunningTime="2026-01-22 06:36:58.539655932 +0000 UTC m=+110.681562667" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.568083 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.568047569 podStartE2EDuration="1m28.568047569s" podCreationTimestamp="2026-01-22 06:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.539175709 +0000 UTC m=+110.681082424" watchObservedRunningTime="2026-01-22 06:36:58.568047569 +0000 UTC m=+110.709954284" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.587610 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.587671 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.587689 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.587714 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.587731 4720 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.670752 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=91.670722326 podStartE2EDuration="1m31.670722326s" podCreationTimestamp="2026-01-22 06:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.669029768 +0000 UTC m=+110.810936513" watchObservedRunningTime="2026-01-22 06:36:58.670722326 +0000 UTC m=+110.812629061" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.691510 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.691583 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.691605 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.691634 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.691655 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.729811 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=91.729772564 podStartE2EDuration="1m31.729772564s" podCreationTimestamp="2026-01-22 06:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.704319201 +0000 UTC m=+110.846225936" watchObservedRunningTime="2026-01-22 06:36:58.729772564 +0000 UTC m=+110.871679309" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.797904 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.798002 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.798013 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.798059 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.798087 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.900515 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.900613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.900633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.900668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:58 crc kubenswrapper[4720]: I0122 06:36:58.900692 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:58Z","lastTransitionTime":"2026-01-22T06:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.003209 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.003255 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.003268 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.003290 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.003303 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.106481 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.106581 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.106599 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.106630 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.106651 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210289 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210231 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:09:42.332560868 +0000 UTC Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210367 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210720 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210756 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210774 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210801 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.210820 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: E0122 06:36:59.210856 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:36:59 crc kubenswrapper[4720]: E0122 06:36:59.211026 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.313893 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.313995 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.314014 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.314043 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.314061 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.416952 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.417029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.417046 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.417074 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.417094 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.520902 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.521028 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.521047 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.521078 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.521097 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.632898 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.633003 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.633024 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.633058 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.633082 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.736367 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.736421 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.736440 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.736468 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.736484 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.839633 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.839710 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.839723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.839747 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.839760 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.943266 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.943320 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.943337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.943357 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:36:59 crc kubenswrapper[4720]: I0122 06:36:59.943370 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:36:59Z","lastTransitionTime":"2026-01-22T06:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.047193 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.047254 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.047271 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.047298 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.047316 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.151104 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.151173 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.151194 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.151223 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.151243 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.210019 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.210212 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:00 crc kubenswrapper[4720]: E0122 06:37:00.210396 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.210483 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 14:13:21.003618695 +0000 UTC Jan 22 06:37:00 crc kubenswrapper[4720]: E0122 06:37:00.210906 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.254532 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.254607 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.254624 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.254652 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.254673 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.358330 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.358417 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.358437 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.358466 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.358485 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.461901 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.461980 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.461998 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.462030 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.462054 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.565138 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.565206 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.565225 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.565257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.565278 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.668762 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.668832 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.668851 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.668883 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.668903 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.773757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.773858 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.773967 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.773997 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.774056 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.877604 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.877692 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.877717 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.877752 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.877777 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.980982 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.981048 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.981066 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.981092 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:00 crc kubenswrapper[4720]: I0122 06:37:00.981110 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:00Z","lastTransitionTime":"2026-01-22T06:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.085270 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.085354 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.085377 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.085411 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.085429 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.205469 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.205531 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.205548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.205575 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.205595 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.210536 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.210624 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:01 crc kubenswrapper[4720]: E0122 06:37:01.210707 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:37:01 crc kubenswrapper[4720]: E0122 06:37:01.210822 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.210890 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:07:00.523707359 +0000 UTC Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.309240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.309301 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.309316 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.309337 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.309358 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.412974 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.413038 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.413056 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.413082 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.413101 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.517176 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.517240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.517257 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.517286 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.517306 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.620723 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.620793 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.620815 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.620844 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.620861 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.725467 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.725545 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.725563 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.725590 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.725610 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.827996 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.828052 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.828063 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.828083 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.828095 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.931094 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.931139 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.931151 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.931171 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:01 crc kubenswrapper[4720]: I0122 06:37:01.931185 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:01Z","lastTransitionTime":"2026-01-22T06:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.035114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.035192 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.035210 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.035240 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.035260 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.138603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.138676 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.138694 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.138721 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.138739 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.210547 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.210586 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:02 crc kubenswrapper[4720]: E0122 06:37:02.210776 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 22 06:37:02 crc kubenswrapper[4720]: E0122 06:37:02.210952 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.211020 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:28:48.703573578 +0000 UTC Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.242053 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.242100 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.242116 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.242142 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.242161 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.346032 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.346096 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.346114 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.346140 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.346160 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.449580 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.449645 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.449663 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.449691 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.449711 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.553462 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.553525 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.553543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.553572 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.553592 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.657447 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.657517 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.657537 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.657566 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.657586 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.761049 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.761158 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.761179 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.761205 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.761224 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.864264 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.864347 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.864361 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.864381 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.864396 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.968887 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.969029 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.969057 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.969093 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:02 crc kubenswrapper[4720]: I0122 06:37:02.969121 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:02Z","lastTransitionTime":"2026-01-22T06:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.072848 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.072936 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.072958 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.072985 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.073006 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.177289 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.177376 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.177397 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.177426 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.177445 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.210489 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.210525 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:03 crc kubenswrapper[4720]: E0122 06:37:03.210726 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817" Jan 22 06:37:03 crc kubenswrapper[4720]: E0122 06:37:03.210945 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.211289 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:25:23.712409827 +0000 UTC Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.281218 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.281336 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.281356 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.281385 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.281408 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.385211 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.385322 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.385726 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.385784 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.385811 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.490543 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.490613 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.490632 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.490662 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.490681 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.595334 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.595395 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.595414 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.595443 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.595465 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.699379 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.699446 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.699464 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.699498 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.699517 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.802548 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.802603 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.802643 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.802668 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.802686 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.906535 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.906629 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.906654 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.906688 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.906712 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.998159 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.998243 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.998274 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.998310 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:03 crc kubenswrapper[4720]: I0122 06:37:03.998340 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:03Z","lastTransitionTime":"2026-01-22T06:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.027541 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.027732 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.027757 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.027786 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.027809 4720 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-22T06:37:04Z","lastTransitionTime":"2026-01-22T06:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.076508 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-n5w5r" podStartSLOduration=92.076472765 podStartE2EDuration="1m32.076472765s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:36:58.759255292 +0000 UTC m=+110.901162027" watchObservedRunningTime="2026-01-22 06:37:04.076472765 +0000 UTC m=+116.218379540" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.078105 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"] Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.078958 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.082640 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.082644 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.082750 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.083753 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.166237 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8ab01bc3-20a6-4b07-949b-fc3138771a45-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.166507 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab01bc3-20a6-4b07-949b-fc3138771a45-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.166548 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ab01bc3-20a6-4b07-949b-fc3138771a45-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.166697 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.166739 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.210289 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.210495 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:04 crc kubenswrapper[4720]: E0122 06:37:04.210776 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 22 06:37:04 crc kubenswrapper[4720]: E0122 06:37:04.211017 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.212261 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 20:32:33.829666086 +0000 UTC
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.212322 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.215666 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"
Jan 22 06:37:04 crc kubenswrapper[4720]: E0122 06:37:04.216396 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pc2f4_openshift-ovn-kubernetes(9a725fa6-120e-41b1-bf7b-e1419e35c891)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.228442 4720 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.268380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab01bc3-20a6-4b07-949b-fc3138771a45-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.268707 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ab01bc3-20a6-4b07-949b-fc3138771a45-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.269015 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.269208 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.269336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.269572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8ab01bc3-20a6-4b07-949b-fc3138771a45-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.269615 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ab01bc3-20a6-4b07-949b-fc3138771a45-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.270593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8ab01bc3-20a6-4b07-949b-fc3138771a45-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.279906 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8ab01bc3-20a6-4b07-949b-fc3138771a45-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.301533 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8ab01bc3-20a6-4b07-949b-fc3138771a45-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mnhtz\" (UID: \"8ab01bc3-20a6-4b07-949b-fc3138771a45\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.406356 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz"
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.957252 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" event={"ID":"8ab01bc3-20a6-4b07-949b-fc3138771a45","Type":"ContainerStarted","Data":"e6a299d69c98bd3393f44abd75b1ec410d4e45de3d3ee99417232b510992bc60"}
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.957423 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" event={"ID":"8ab01bc3-20a6-4b07-949b-fc3138771a45","Type":"ContainerStarted","Data":"79ec8dc098e1c5f3065fbdf6ef6d209727632d00f48d48d9f2fc33ccbf18cd80"}
Jan 22 06:37:04 crc kubenswrapper[4720]: I0122 06:37:04.980017 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mnhtz" podStartSLOduration=92.979985237 podStartE2EDuration="1m32.979985237s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:04.978851315 +0000 UTC m=+117.120758060" watchObservedRunningTime="2026-01-22 06:37:04.979985237 +0000 UTC m=+117.121891972"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.210469 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.210664 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:05 crc kubenswrapper[4720]: E0122 06:37:05.210680 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:05 crc kubenswrapper[4720]: E0122 06:37:05.210896 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.964991 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/1.log"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.966294 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/0.log"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.966385 4720 generic.go:334] "Generic (PLEG): container finished" podID="85373343-156d-4de0-a72b-baaf7c4e3d08" containerID="b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966" exitCode=1
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.966477 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerDied","Data":"b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966"}
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.966544 4720 scope.go:117] "RemoveContainer" containerID="e4da0abad7292a9d82b80caabd4b1f0fc15fbe54bfa5b4316c85b2131ea8b5b7"
Jan 22 06:37:05 crc kubenswrapper[4720]: I0122 06:37:05.967305 4720 scope.go:117] "RemoveContainer" containerID="b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966"
Jan 22 06:37:05 crc kubenswrapper[4720]: E0122 06:37:05.967656 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-n5w5r_openshift-multus(85373343-156d-4de0-a72b-baaf7c4e3d08)\"" pod="openshift-multus/multus-n5w5r" podUID="85373343-156d-4de0-a72b-baaf7c4e3d08"
Jan 22 06:37:06 crc kubenswrapper[4720]: I0122 06:37:06.210088 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:06 crc kubenswrapper[4720]: I0122 06:37:06.210088 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:06 crc kubenswrapper[4720]: E0122 06:37:06.210273 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:06 crc kubenswrapper[4720]: E0122 06:37:06.210344 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:06 crc kubenswrapper[4720]: I0122 06:37:06.973522 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/1.log"
Jan 22 06:37:07 crc kubenswrapper[4720]: I0122 06:37:07.210518 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:07 crc kubenswrapper[4720]: I0122 06:37:07.210577 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:07 crc kubenswrapper[4720]: E0122 06:37:07.210781 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:07 crc kubenswrapper[4720]: E0122 06:37:07.210965 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:08 crc kubenswrapper[4720]: E0122 06:37:08.178716 4720 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Jan 22 06:37:08 crc kubenswrapper[4720]: I0122 06:37:08.209873 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:08 crc kubenswrapper[4720]: I0122 06:37:08.210058 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:08 crc kubenswrapper[4720]: E0122 06:37:08.211828 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:08 crc kubenswrapper[4720]: E0122 06:37:08.211948 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:08 crc kubenswrapper[4720]: E0122 06:37:08.301243 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:37:09 crc kubenswrapper[4720]: I0122 06:37:09.210096 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:09 crc kubenswrapper[4720]: I0122 06:37:09.210127 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:09 crc kubenswrapper[4720]: E0122 06:37:09.210348 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:09 crc kubenswrapper[4720]: E0122 06:37:09.210573 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:10 crc kubenswrapper[4720]: I0122 06:37:10.209782 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:10 crc kubenswrapper[4720]: I0122 06:37:10.209953 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:10 crc kubenswrapper[4720]: E0122 06:37:10.210080 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:10 crc kubenswrapper[4720]: E0122 06:37:10.210307 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:11 crc kubenswrapper[4720]: I0122 06:37:11.210469 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:11 crc kubenswrapper[4720]: I0122 06:37:11.210467 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:11 crc kubenswrapper[4720]: E0122 06:37:11.211357 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:11 crc kubenswrapper[4720]: E0122 06:37:11.211837 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:12 crc kubenswrapper[4720]: I0122 06:37:12.210575 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:12 crc kubenswrapper[4720]: I0122 06:37:12.210685 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:12 crc kubenswrapper[4720]: E0122 06:37:12.210853 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:12 crc kubenswrapper[4720]: E0122 06:37:12.211158 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:13 crc kubenswrapper[4720]: I0122 06:37:13.209616 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:13 crc kubenswrapper[4720]: I0122 06:37:13.209626 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:13 crc kubenswrapper[4720]: E0122 06:37:13.209824 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:13 crc kubenswrapper[4720]: E0122 06:37:13.209970 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:13 crc kubenswrapper[4720]: E0122 06:37:13.303244 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:37:14 crc kubenswrapper[4720]: I0122 06:37:14.210642 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:14 crc kubenswrapper[4720]: E0122 06:37:14.210853 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:14 crc kubenswrapper[4720]: I0122 06:37:14.211006 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:14 crc kubenswrapper[4720]: E0122 06:37:14.211192 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:15 crc kubenswrapper[4720]: I0122 06:37:15.210063 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:15 crc kubenswrapper[4720]: I0122 06:37:15.210134 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:15 crc kubenswrapper[4720]: E0122 06:37:15.210449 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:15 crc kubenswrapper[4720]: E0122 06:37:15.211196 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:16 crc kubenswrapper[4720]: I0122 06:37:16.211350 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:16 crc kubenswrapper[4720]: I0122 06:37:16.211351 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:16 crc kubenswrapper[4720]: E0122 06:37:16.211536 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:16 crc kubenswrapper[4720]: E0122 06:37:16.211668 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:17 crc kubenswrapper[4720]: I0122 06:37:17.210092 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:17 crc kubenswrapper[4720]: I0122 06:37:17.210193 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:17 crc kubenswrapper[4720]: E0122 06:37:17.210401 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:17 crc kubenswrapper[4720]: I0122 06:37:17.210580 4720 scope.go:117] "RemoveContainer" containerID="b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966"
Jan 22 06:37:17 crc kubenswrapper[4720]: E0122 06:37:17.210738 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:18 crc kubenswrapper[4720]: I0122 06:37:18.019389 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/1.log"
Jan 22 06:37:18 crc kubenswrapper[4720]: I0122 06:37:18.020022 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerStarted","Data":"c0028ee94bbbee298a2b436cb261af92d992335251cf0d39192eacaf29503865"}
Jan 22 06:37:18 crc kubenswrapper[4720]: I0122 06:37:18.210428 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:18 crc kubenswrapper[4720]: I0122 06:37:18.210567 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:18 crc kubenswrapper[4720]: E0122 06:37:18.213321 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:18 crc kubenswrapper[4720]: E0122 06:37:18.213445 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:18 crc kubenswrapper[4720]: I0122 06:37:18.214255 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"
Jan 22 06:37:18 crc kubenswrapper[4720]: E0122 06:37:18.304226 4720 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.029846 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log"
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.033279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerStarted","Data":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"}
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.034799 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4"
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.080577 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podStartSLOduration=107.08055077 podStartE2EDuration="1m47.08055077s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:19.080488869 +0000 UTC m=+131.222395664" watchObservedRunningTime="2026-01-22 06:37:19.08055077 +0000 UTC m=+131.222457475"
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.190679 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kvtch"]
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.190888 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:19 crc kubenswrapper[4720]: E0122 06:37:19.191072 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:19 crc kubenswrapper[4720]: I0122 06:37:19.210067 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:19 crc kubenswrapper[4720]: E0122 06:37:19.210245 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:20 crc kubenswrapper[4720]: I0122 06:37:20.210322 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:20 crc kubenswrapper[4720]: I0122 06:37:20.210445 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:20 crc kubenswrapper[4720]: E0122 06:37:20.210892 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:20 crc kubenswrapper[4720]: E0122 06:37:20.211101 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:21 crc kubenswrapper[4720]: I0122 06:37:21.210585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:21 crc kubenswrapper[4720]: I0122 06:37:21.210585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:21 crc kubenswrapper[4720]: E0122 06:37:21.210811 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:21 crc kubenswrapper[4720]: E0122 06:37:21.210958 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:22 crc kubenswrapper[4720]: I0122 06:37:22.210001 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:22 crc kubenswrapper[4720]: I0122 06:37:22.210068 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:22 crc kubenswrapper[4720]: E0122 06:37:22.210209 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 22 06:37:22 crc kubenswrapper[4720]: E0122 06:37:22.210378 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 22 06:37:23 crc kubenswrapper[4720]: I0122 06:37:23.210490 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 22 06:37:23 crc kubenswrapper[4720]: I0122 06:37:23.210532 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch"
Jan 22 06:37:23 crc kubenswrapper[4720]: E0122 06:37:23.210722 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 22 06:37:23 crc kubenswrapper[4720]: E0122 06:37:23.210880 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kvtch" podUID="409f50e8-9b68-4efe-8eb4-bc144d383817"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.210820 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.211012 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.214540 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.214838 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.215008 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.215236 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.727620 4720 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.794418 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.795686 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.796574 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.797570 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.801090 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.802229 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.803801 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.805156 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.806244 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vp8tq"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.806997 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.809565 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hrglt"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.810329 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.811089 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.811536 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.812512 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.813350 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.814037 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.814783 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.822683 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.823684 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.824966 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ws6w8"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.825995 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831249 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831477 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831626 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831654 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831673 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831778 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831801 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831800 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.837264 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.837678 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 22 06:37:24 crc 
kubenswrapper[4720]: I0122 06:37:24.837854 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831809 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.837283 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.838793 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.839079 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.831804 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.840488 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.841356 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.841551 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.844196 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.844516 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.844924 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.845250 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.845476 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.845834 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.848396 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.848761 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.848814 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.849099 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.849167 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.849647 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.846018 4720 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.848773 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.850344 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.846091 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.858357 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.868356 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.869308 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.869418 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.869503 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.869699 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.869970 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.870135 4720 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.870410 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872181 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872350 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872446 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872458 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872519 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872709 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872709 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.872873 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 06:37:24 crc 
kubenswrapper[4720]: I0122 06:37:24.872823 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873064 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873263 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873271 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873382 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873384 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873472 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873594 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873679 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873739 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.873770 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.874078 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.874167 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.874205 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.874264 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.874340 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.877513 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.882444 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tfvxx"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.886876 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.888625 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4ztkj"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.889109 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsfv4"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.891593 4720 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.891830 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.892131 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.892920 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.894998 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.895084 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.897305 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.897469 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.898438 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.905406 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.905836 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.909676 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.925494 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.926512 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.931783 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.932517 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934002 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934201 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-dir\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934446 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934484 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-client\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934588 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf4tk\" (UniqueName: \"kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk\") pod \"console-f9d7485db-zv6lm\" (UID: 
\"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934621 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-trusted-ca\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-serving-cert\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934924 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934951 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-config\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.934976 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.935001 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-config\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.935209 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b9e32c5-534c-42ed-96fd-4e747d7084dd-serving-cert\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.935664 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42322892-7874-4c59-ab1a-e3f205212e2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.935824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936169 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtjd\" (UniqueName: \"kubernetes.io/projected/dc1c1a54-81dc-4e91-80db-606befa6c477-kube-api-access-dxtjd\") pod \"downloads-7954f5f757-ws6w8\" (UID: \"dc1c1a54-81dc-4e91-80db-606befa6c477\") " pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936213 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.936870 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c\") pod \"controller-manager-879f6c89f-dhklt\" (UID: 
\"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.937071 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.937148 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.937201 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.937573 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939544 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939636 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-images\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939674 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939701 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-encryption-config\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzrfw\" (UniqueName: \"kubernetes.io/projected/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-kube-api-access-jzrfw\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939786 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754cl\" (UniqueName: \"kubernetes.io/projected/610296d1-12dc-4132-8ef9-9cc37ed81a3d-kube-api-access-754cl\") pod \"console-operator-58897d9998-hrglt\" 
(UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-policies\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnxk\" (UniqueName: \"kubernetes.io/projected/4b9e32c5-534c-42ed-96fd-4e747d7084dd-kube-api-access-nxnxk\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939888 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939940 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939960 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.939983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940008 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e41cd5a0-a754-4161-938a-463f2673d37e-machine-approver-tls\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t786k\" (UniqueName: \"kubernetes.io/projected/e41cd5a0-a754-4161-938a-463f2673d37e-kube-api-access-t786k\") 
pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940078 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-config\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940132 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjj6\" (UniqueName: \"kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc 
kubenswrapper[4720]: I0122 06:37:24.940187 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg4z\" (UniqueName: \"kubernetes.io/projected/42322892-7874-4c59-ab1a-e3f205212e2e-kube-api-access-ghg4z\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940217 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940276 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940308 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z727j\" (UniqueName: \"kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940402 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-auth-proxy-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940531 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940685 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940758 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpw8s\" (UniqueName: \"kubernetes.io/projected/1088f6d1-1bac-4e7c-a944-2e9b5d259413-kube-api-access-tpw8s\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.940862 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610296d1-12dc-4132-8ef9-9cc37ed81a3d-serving-cert\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.941266 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.945999 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.948768 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.949412 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-6j6b9"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.949732 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.949895 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.950823 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.951017 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.953393 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.956482 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.956712 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xf5cz"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957181 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957173 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957212 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957457 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957384 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957663 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957677 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957760 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957730 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957851 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957899 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.957954 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958001 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958022 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958050 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958084 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958147 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958166 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958183 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958195 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958268 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958282 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958295 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958307 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958352 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958362 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958407 4720 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.958761 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.959162 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.959379 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.959709 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.960053 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.960801 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.961110 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.963808 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.964536 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.966015 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.967884 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.968328 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.968340 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.977632 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.978175 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.981064 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz"] Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.981200 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984217 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984371 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984587 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984615 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984817 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.984982 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.986037 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4b66q"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.987304 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.987455 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4b66q"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.987754 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.988864 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.989524 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.990874 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.991320 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.992325 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.993681 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.994793 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.996557 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hrglt"]
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.998902 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 22 06:37:24 crc kubenswrapper[4720]: I0122 06:37:24.999138 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ws6w8"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.002037 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vp8tq"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.005944 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsfv4"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.011608 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.012498 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9n8jj"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.021193 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.026167 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.027647 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.030253 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.039627 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041795 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-client\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041830 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf4tk\" (UniqueName: \"kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041854 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041877 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-trusted-ca\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041898 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041940 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-serving-cert\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041958 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.041983 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-default-certificate\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-config\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042054 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042074 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-config\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042094 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl5vk\" (UniqueName: \"kubernetes.io/projected/824d4c6b-8052-429c-a050-4339913991b5-kube-api-access-tl5vk\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042112 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b9e32c5-534c-42ed-96fd-4e747d7084dd-serving-cert\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042131 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042149 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-stats-auth\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042171 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42322892-7874-4c59-ab1a-e3f205212e2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042188 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042207 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/824d4c6b-8052-429c-a050-4339913991b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042210 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxtjd\" (UniqueName: \"kubernetes.io/projected/dc1c1a54-81dc-4e91-80db-606befa6c477-kube-api-access-dxtjd\") pod \"downloads-7954f5f757-ws6w8\" (UID: \"dc1c1a54-81dc-4e91-80db-606befa6c477\") " pod="openshift-console/downloads-7954f5f757-ws6w8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042508 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042582 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b768bae9-692e-4039-8fea-d88359e16ee4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042689 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042710 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042727 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042743 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042759 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042778 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-metrics-certs\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042806 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042825 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042841 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-encryption-config\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042865 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-images\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042890 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a73166-b8c6-4dab-bd45-46b640a4b1c5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042927 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzrfw\" (UniqueName: \"kubernetes.io/projected/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-kube-api-access-jzrfw\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042935 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-config\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042947 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042968 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754cl\" (UniqueName: \"kubernetes.io/projected/610296d1-12dc-4132-8ef9-9cc37ed81a3d-kube-api-access-754cl\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042988 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxnxk\" (UniqueName: \"kubernetes.io/projected/4b9e32c5-534c-42ed-96fd-4e747d7084dd-kube-api-access-nxnxk\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043009 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a73166-b8c6-4dab-bd45-46b640a4b1c5-config\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043030 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-policies\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043047 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043064 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043080 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043097 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043135 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e41cd5a0-a754-4161-938a-463f2673d37e-machine-approver-tls\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043169 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t786k\" (UniqueName: \"kubernetes.io/projected/e41cd5a0-a754-4161-938a-463f2673d37e-kube-api-access-t786k\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043188 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-config\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043207 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a73166-b8c6-4dab-bd45-46b640a4b1c5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043226 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjj6\" (UniqueName: \"kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043285 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z727j\" (UniqueName: \"kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghg4z\" (UniqueName: \"kubernetes.io/projected/42322892-7874-4c59-ab1a-e3f205212e2e-kube-api-access-ghg4z\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043318 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043352 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-auth-proxy-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b3f8d7-d79f-48e6-be2f-eeb97827e913-service-ca-bundle\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043390 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq57f\" (UniqueName: \"kubernetes.io/projected/b768bae9-692e-4039-8fea-d88359e16ee4-kube-api-access-qq57f\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndbzv\" (UniqueName: \"kubernetes.io/projected/12b3f8d7-d79f-48e6-be2f-eeb97827e913-kube-api-access-ndbzv\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpw8s\" (UniqueName: \"kubernetes.io/projected/1088f6d1-1bac-4e7c-a944-2e9b5d259413-kube-api-access-tpw8s\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043517 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610296d1-12dc-4132-8ef9-9cc37ed81a3d-serving-cert\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043537 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043683 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-dir\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.043775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-dir\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.044279 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.044303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-trusted-ca\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.045378 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.047187 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.048615 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.048757 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-audit-policies\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049110 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-service-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/610296d1-12dc-4132-8ef9-9cc37ed81a3d-config\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049198 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049263 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049304 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tfvxx"]
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/42322892-7874-4c59-ab1a-e3f205212e2e-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"
Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.049558 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xf5cz"]
Jan 22 06:37:25 crc kubenswrapper[4720]:
I0122 06:37:25.050189 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e41cd5a0-a754-4161-938a-463f2673d37e-auth-proxy-config\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050225 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-images\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050247 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050556 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042183 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050738 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b9e32c5-534c-42ed-96fd-4e747d7084dd-serving-cert\") pod 
\"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050772 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.050879 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-encryption-config\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.051259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.042936 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.052042 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-client\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.052431 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b9e32c5-534c-42ed-96fd-4e747d7084dd-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.052815 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.052884 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.052454 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42322892-7874-4c59-ab1a-e3f205212e2e-config\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 
06:37:25.053152 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.053483 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.053545 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.053862 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.053991 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 
crc kubenswrapper[4720]: I0122 06:37:25.054268 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1088f6d1-1bac-4e7c-a944-2e9b5d259413-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.054656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.055167 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.055935 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.056112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.056413 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.056561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.056652 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.056779 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.057863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e41cd5a0-a754-4161-938a-463f2673d37e-machine-approver-tls\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.057870 4720 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.058557 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.058660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.059075 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.059204 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.060112 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.061169 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p"] Jan 
22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.062407 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.063632 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.064919 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.066081 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4ztkj"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.066368 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.067060 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.068142 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.070211 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4b66q"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.070630 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.071188 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1088f6d1-1bac-4e7c-a944-2e9b5d259413-serving-cert\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.071235 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.071343 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/610296d1-12dc-4132-8ef9-9cc37ed81a3d-serving-cert\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.071391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.072017 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.072684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.072806 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9n8jj"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.073764 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.074736 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-rg9qd"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.078335 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.081588 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.081759 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.086346 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.094063 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.094155 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.096307 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-t8k8z"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.097512 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.097879 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.097953 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rg9qd"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.098729 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8k8z"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.102790 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tdfb6"] Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.103515 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.119228 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.138563 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-default-certificate\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl5vk\" (UniqueName: \"kubernetes.io/projected/824d4c6b-8052-429c-a050-4339913991b5-kube-api-access-tl5vk\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145182 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-stats-auth\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145206 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/824d4c6b-8052-429c-a050-4339913991b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b768bae9-692e-4039-8fea-d88359e16ee4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-metrics-certs\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145365 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a73166-b8c6-4dab-bd45-46b640a4b1c5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a73166-b8c6-4dab-bd45-46b640a4b1c5-config\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a73166-b8c6-4dab-bd45-46b640a4b1c5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145479 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b3f8d7-d79f-48e6-be2f-eeb97827e913-service-ca-bundle\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145499 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq57f\" (UniqueName: \"kubernetes.io/projected/b768bae9-692e-4039-8fea-d88359e16ee4-kube-api-access-qq57f\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.145526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndbzv\" (UniqueName: \"kubernetes.io/projected/12b3f8d7-d79f-48e6-be2f-eeb97827e913-kube-api-access-ndbzv\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.148306 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/824d4c6b-8052-429c-a050-4339913991b5-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.148623 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b768bae9-692e-4039-8fea-d88359e16ee4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.158755 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.178777 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.199254 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.210168 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.210219 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.218243 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.238268 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.258867 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.269725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-default-certificate\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.278594 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.289719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-stats-auth\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.298495 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.308549 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12b3f8d7-d79f-48e6-be2f-eeb97827e913-metrics-certs\") 
pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.318589 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.326662 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b3f8d7-d79f-48e6-be2f-eeb97827e913-service-ca-bundle\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.339270 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.359842 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.378638 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.397879 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.418548 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.429628 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87a73166-b8c6-4dab-bd45-46b640a4b1c5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: 
\"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.438278 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.447726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87a73166-b8c6-4dab-bd45-46b640a4b1c5-config\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.457394 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.478376 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.498879 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.527658 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.539190 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.558437 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.579023 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.598948 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.618620 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.638602 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.659294 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.678685 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.698517 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.718451 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.740118 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.758465 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 22 06:37:25 
crc kubenswrapper[4720]: I0122 06:37:25.778985 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.798221 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.818613 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.838166 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.859512 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.878789 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.899049 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.918701 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.938708 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.958210 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.976457 4720 request.go:700] Waited for 1.007797634s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0 Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.978542 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 06:37:25 crc kubenswrapper[4720]: I0122 06:37:25.998657 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.019508 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.060112 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.099096 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.119500 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.139012 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.158545 4720 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.179411 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.198562 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.218633 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.238940 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.269377 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.278747 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.299448 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.318210 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.338860 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.358391 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 06:37:26 crc 
kubenswrapper[4720]: I0122 06:37:26.378870 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.399308 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.418774 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.439085 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.459600 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.479648 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.498462 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.519468 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.538711 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.558410 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.579665 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.598464 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.618845 4720 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.639729 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.695954 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf4tk\" (UniqueName: \"kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk\") pod \"console-f9d7485db-zv6lm\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.720596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzrfw\" (UniqueName: \"kubernetes.io/projected/7b9dafa1-4a65-48a2-bf74-5bfcea6aa310-kube-api-access-jzrfw\") pod \"openshift-apiserver-operator-796bbdcf4f-gz8mf\" (UID: \"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.731493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxnxk\" (UniqueName: \"kubernetes.io/projected/4b9e32c5-534c-42ed-96fd-4e747d7084dd-kube-api-access-nxnxk\") pod \"authentication-operator-69f744f599-bkx6t\" (UID: \"4b9e32c5-534c-42ed-96fd-4e747d7084dd\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.737093 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-754cl\" (UniqueName: \"kubernetes.io/projected/610296d1-12dc-4132-8ef9-9cc37ed81a3d-kube-api-access-754cl\") pod \"console-operator-58897d9998-hrglt\" (UID: \"610296d1-12dc-4132-8ef9-9cc37ed81a3d\") " pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.748247 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.757217 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpw8s\" (UniqueName: \"kubernetes.io/projected/1088f6d1-1bac-4e7c-a944-2e9b5d259413-kube-api-access-tpw8s\") pod \"apiserver-7bbb656c7d-9cvs7\" (UID: \"1088f6d1-1bac-4e7c-a944-2e9b5d259413\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.775124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxtjd\" (UniqueName: \"kubernetes.io/projected/dc1c1a54-81dc-4e91-80db-606befa6c477-kube-api-access-dxtjd\") pod \"downloads-7954f5f757-ws6w8\" (UID: \"dc1c1a54-81dc-4e91-80db-606befa6c477\") " pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.781683 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.793327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t786k\" (UniqueName: \"kubernetes.io/projected/e41cd5a0-a754-4161-938a-463f2673d37e-kube-api-access-t786k\") pod \"machine-approver-56656f9798-gxfr8\" (UID: \"e41cd5a0-a754-4161-938a-463f2673d37e\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.799160 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.820791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjj6\" (UniqueName: \"kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6\") pod \"route-controller-manager-6576b87f9c-gxkzq\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.846233 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.855502 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghg4z\" (UniqueName: \"kubernetes.io/projected/42322892-7874-4c59-ab1a-e3f205212e2e-kube-api-access-ghg4z\") pod \"machine-api-operator-5694c8668f-hxdwr\" (UID: \"42322892-7874-4c59-ab1a-e3f205212e2e\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.855793 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c\") pod \"controller-manager-879f6c89f-dhklt\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.882449 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.900696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.924339 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z727j\" (UniqueName: \"kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j\") pod \"oauth-openshift-558db77b4-vp8tq\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.924489 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.924685 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.941699 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 06:37:26 crc kubenswrapper[4720]: W0122 06:37:26.941751 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode41cd5a0_a754_4161_938a_463f2673d37e.slice/crio-6b83414e04af860a43f8dbfd01a208902c469cc686b3e3a379dada6129733a23 WatchSource:0}: Error finding container 6b83414e04af860a43f8dbfd01a208902c469cc686b3e3a379dada6129733a23: Status 404 returned error can't find the container with id 6b83414e04af860a43f8dbfd01a208902c469cc686b3e3a379dada6129733a23 Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.959752 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.976692 4720 request.go:700] Waited for 1.878814596s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.978160 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.980203 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.998801 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" Jan 22 06:37:26 crc kubenswrapper[4720]: I0122 06:37:26.999142 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.019513 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.039370 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.053889 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bkx6t"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.057504 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.064087 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.066888 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" event={"ID":"e41cd5a0-a754-4161-938a-463f2673d37e","Type":"ContainerStarted","Data":"6b83414e04af860a43f8dbfd01a208902c469cc686b3e3a379dada6129733a23"} Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.102107 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl5vk\" (UniqueName: \"kubernetes.io/projected/824d4c6b-8052-429c-a050-4339913991b5-kube-api-access-tl5vk\") pod \"cluster-samples-operator-665b6dd947-x6khz\" (UID: \"824d4c6b-8052-429c-a050-4339913991b5\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.106741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.113285 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq57f\" (UniqueName: \"kubernetes.io/projected/b768bae9-692e-4039-8fea-d88359e16ee4-kube-api-access-qq57f\") pod \"control-plane-machine-set-operator-78cbb6b69f-zmhj8\" (UID: \"b768bae9-692e-4039-8fea-d88359e16ee4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.120576 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:27 crc kubenswrapper[4720]: W0122 06:37:27.127392 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b9e32c5_534c_42ed_96fd_4e747d7084dd.slice/crio-f3b41213da8b1bc179a5a722a4a44574aefae32cf16e9b87bf57e39359443b73 WatchSource:0}: Error finding container f3b41213da8b1bc179a5a722a4a44574aefae32cf16e9b87bf57e39359443b73: Status 404 returned error can't find the container with id f3b41213da8b1bc179a5a722a4a44574aefae32cf16e9b87bf57e39359443b73 Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.137346 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.138607 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndbzv\" (UniqueName: \"kubernetes.io/projected/12b3f8d7-d79f-48e6-be2f-eeb97827e913-kube-api-access-ndbzv\") pod \"router-default-5444994796-6j6b9\" (UID: \"12b3f8d7-d79f-48e6-be2f-eeb97827e913\") " pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.146595 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.154173 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87a73166-b8c6-4dab-bd45-46b640a4b1c5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-twh47\" (UID: \"87a73166-b8c6-4dab-bd45-46b640a4b1c5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.158797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.183228 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.193538 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ws6w8"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.206994 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.233719 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.276504 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279043 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98dc2\" (UniqueName: \"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-kube-api-access-98dc2\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279115 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279137 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279177 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279197 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630eae9a-c1b8-47ce-873a-3ef59ef6c002-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279246 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f57a689-3b37-4c87-a02f-7898dbbaa665-trusted-ca\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279291 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-serving-cert\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279355 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-etcd-client\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279371 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f57a689-3b37-4c87-a02f-7898dbbaa665-metrics-tls\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279423 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4qh\" (UniqueName: \"kubernetes.io/projected/52fd7f11-0ca1-4af5-98a0-00789fb541e6-kube-api-access-dv4qh\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 
06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-audit-dir\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279587 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvjjc\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-kube-api-access-rvjjc\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279677 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzrl\" (UniqueName: 
\"kubernetes.io/projected/a8593368-7930-499d-aa21-6526251ce66c-kube-api-access-wmzrl\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.279698 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.280099 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:27.780082496 +0000 UTC m=+139.921989201 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.280563 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.280687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/def7efcb-32f5-4a8b-9be9-9fc39456c534-proxy-tls\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.280720 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8593368-7930-499d-aa21-6526251ce66c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9bq9\" 
(UniqueName: \"kubernetes.io/projected/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-kube-api-access-j9bq9\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d496aa-81c7-47cf-9966-00c96cecc997-config\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mmwx\" (UniqueName: \"kubernetes.io/projected/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-kube-api-access-4mmwx\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281887 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-config\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281968 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.281992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-encryption-config\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282036 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f98b4\" (UniqueName: \"kubernetes.io/projected/f574ab44-d876-47fc-b23e-a46666fdaf9e-kube-api-access-f98b4\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282065 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-client\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282088 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdfwb\" (UniqueName: \"kubernetes.io/projected/def7efcb-32f5-4a8b-9be9-9fc39456c534-kube-api-access-mdfwb\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rssgf\" (UniqueName: \"kubernetes.io/projected/baa1be6a-a3ce-4a10-9038-8e2cc8e7079c-kube-api-access-rssgf\") pod \"migrator-59844c95c7-rlg6q\" (UID: \"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282145 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/def7efcb-32f5-4a8b-9be9-9fc39456c534-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282166 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-node-pullsecrets\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282190 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f574ab44-d876-47fc-b23e-a46666fdaf9e-metrics-tls\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282351 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630eae9a-c1b8-47ce-873a-3ef59ef6c002-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-images\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: 
\"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d496aa-81c7-47cf-9966-00c96cecc997-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282476 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-serving-cert\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282517 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-service-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282537 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282575 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-config\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282624 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmhv\" (UniqueName: \"kubernetes.io/projected/8486a6bf-b477-46be-9841-94481ef84313-kube-api-access-thmhv\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282645 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282677 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-serving-cert\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282728 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhm5d\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282748 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/655a100f-fb0a-4668-8d78-3b357542dad4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282768 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655a100f-fb0a-4668-8d78-3b357542dad4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282791 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-image-import-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52fd7f11-0ca1-4af5-98a0-00789fb541e6-proxy-tls\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282869 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98zgs\" (UniqueName: \"kubernetes.io/projected/630eae9a-c1b8-47ce-873a-3ef59ef6c002-kube-api-access-98zgs\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282887 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-kube-api-access-pngd8\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282918 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282948 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-audit\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " 
pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.282968 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d496aa-81c7-47cf-9966-00c96cecc997-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.283005 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.287681 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.302924 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.315453 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-hrglt"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.331601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384361 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384754 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khxjq\" (UniqueName: \"kubernetes.io/projected/bfc7dce3-7c14-4844-b363-d7f9422769cd-kube-api-access-khxjq\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-audit-dir\") pod 
\"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384862 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvjjc\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-kube-api-access-rvjjc\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384883 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzhz2\" (UniqueName: \"kubernetes.io/projected/2a217772-16ab-414b-b3b6-3758c65a8c58-kube-api-access-qzhz2\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384944 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wmzrl\" (UniqueName: \"kubernetes.io/projected/a8593368-7930-499d-aa21-6526251ce66c-kube-api-access-wmzrl\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.384992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-cabundle\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.385011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.385032 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386751 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8499\" (UniqueName: \"kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386807 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/def7efcb-32f5-4a8b-9be9-9fc39456c534-proxy-tls\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8593368-7930-499d-aa21-6526251ce66c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386873 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9bq9\" (UniqueName: 
\"kubernetes.io/projected/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-kube-api-access-j9bq9\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386920 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-tmpfs\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.386981 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-key\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387007 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvlp\" (UniqueName: \"kubernetes.io/projected/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-kube-api-access-vwvlp\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387027 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d496aa-81c7-47cf-9966-00c96cecc997-config\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387051 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387076 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mmwx\" (UniqueName: \"kubernetes.io/projected/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-kube-api-access-4mmwx\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387099 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387118 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppbz\" (UniqueName: \"kubernetes.io/projected/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-kube-api-access-rppbz\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vss24\" (UniqueName: \"kubernetes.io/projected/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-kube-api-access-vss24\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-config\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387221 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-encryption-config\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387290 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f98b4\" (UniqueName: \"kubernetes.io/projected/f574ab44-d876-47fc-b23e-a46666fdaf9e-kube-api-access-f98b4\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-client\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-certs\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdfwb\" (UniqueName: 
\"kubernetes.io/projected/def7efcb-32f5-4a8b-9be9-9fc39456c534-kube-api-access-mdfwb\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387387 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/def7efcb-32f5-4a8b-9be9-9fc39456c534-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-csi-data-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rssgf\" (UniqueName: \"kubernetes.io/projected/baa1be6a-a3ce-4a10-9038-8e2cc8e7079c-kube-api-access-rssgf\") pod \"migrator-59844c95c7-rlg6q\" (UID: \"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387456 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-node-pullsecrets\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc 
kubenswrapper[4720]: I0122 06:37:27.387484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387504 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxg7\" (UniqueName: \"kubernetes.io/projected/5cdfd3a3-6548-4657-a810-55f8eaac886b-kube-api-access-txxg7\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f574ab44-d876-47fc-b23e-a46666fdaf9e-metrics-tls\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630eae9a-c1b8-47ce-873a-3ef59ef6c002-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387565 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-images\") pod 
\"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387583 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d496aa-81c7-47cf-9966-00c96cecc997-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-srv-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387620 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-serving-cert\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387648 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387666 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387684 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-serving-cert\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-service-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387759 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-config\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387788 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmhv\" (UniqueName: \"kubernetes.io/projected/8486a6bf-b477-46be-9841-94481ef84313-kube-api-access-thmhv\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 
06:37:27.387809 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387829 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-config\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387866 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7889h\" (UniqueName: \"kubernetes.io/projected/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-kube-api-access-7889h\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387883 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-serving-cert\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-registration-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " 
pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387970 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655a100f-fb0a-4668-8d78-3b357542dad4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.387992 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-image-import-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhm5d\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388038 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/655a100f-fb0a-4668-8d78-3b357542dad4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388057 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/52fd7f11-0ca1-4af5-98a0-00789fb541e6-proxy-tls\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388076 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98zgs\" (UniqueName: \"kubernetes.io/projected/630eae9a-c1b8-47ce-873a-3ef59ef6c002-kube-api-access-98zgs\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388096 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-kube-api-access-pngd8\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388138 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-srv-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc 
kubenswrapper[4720]: I0122 06:37:27.388159 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-audit\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388190 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d496aa-81c7-47cf-9966-00c96cecc997-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388210 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhq6v\" (UniqueName: \"kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388226 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-node-bootstrap-token\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388246 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-trusted-ca-bundle\") pod 
\"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qfn8\" (UniqueName: \"kubernetes.io/projected/ef604d7d-576b-48eb-8131-888627c5c681-kube-api-access-8qfn8\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388299 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98dc2\" (UniqueName: \"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-kube-api-access-98dc2\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388387 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388406 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388427 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630eae9a-c1b8-47ce-873a-3ef59ef6c002-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388463 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-plugins-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " 
pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388481 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-mountpoint-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-config-volume\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388514 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-socket-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388537 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdfd3a3-6548-4657-a810-55f8eaac886b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.388623 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:27.888599006 +0000 UTC m=+140.030505701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388685 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f57a689-3b37-4c87-a02f-7898dbbaa665-trusted-ca\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.388733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2p5t\" (UniqueName: \"kubernetes.io/projected/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-kube-api-access-q2p5t\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc 
kubenswrapper[4720]: I0122 06:37:27.388768 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef604d7d-576b-48eb-8131-888627c5c681-cert\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389201 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-serving-cert\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389365 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-metrics-tls\") pod \"dns-default-t8k8z\" (UID: 
\"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389421 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4qh\" (UniqueName: \"kubernetes.io/projected/52fd7f11-0ca1-4af5-98a0-00789fb541e6-kube-api-access-dv4qh\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-etcd-client\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389475 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f57a689-3b37-4c87-a02f-7898dbbaa665-metrics-tls\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.389544 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ngqp\" (UniqueName: \"kubernetes.io/projected/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-kube-api-access-7ngqp\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.393083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-audit-dir\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.395644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-image-import-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.396253 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f57a689-3b37-4c87-a02f-7898dbbaa665-trusted-ca\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.396965 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-trusted-ca-bundle\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.397382 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.398398 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/def7efcb-32f5-4a8b-9be9-9fc39456c534-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.398956 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.402010 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-etcd-serving-ca\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.402489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/655a100f-fb0a-4668-8d78-3b357542dad4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.403397 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 
22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.403407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-config\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.404589 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-audit\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.405356 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.405869 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8486a6bf-b477-46be-9841-94481ef84313-node-pullsecrets\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.406784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d496aa-81c7-47cf-9966-00c96cecc997-config\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 
06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.408083 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.408813 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8486a6bf-b477-46be-9841-94481ef84313-config\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.409544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.402817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-available-featuregates\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.410142 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a8593368-7930-499d-aa21-6526251ce66c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.410570 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/def7efcb-32f5-4a8b-9be9-9fc39456c534-proxy-tls\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.411284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-client\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.411982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/630eae9a-c1b8-47ce-873a-3ef59ef6c002-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.412203 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-serving-cert\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.412408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-encryption-config\") pod 
\"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.412599 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-service-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.413254 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/52fd7f11-0ca1-4af5-98a0-00789fb541e6-images\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.414047 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0d496aa-81c7-47cf-9966-00c96cecc997-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.414428 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-serving-cert\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.417556 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-serving-cert\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.417774 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f574ab44-d876-47fc-b23e-a46666fdaf9e-metrics-tls\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.418280 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.418370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/655a100f-fb0a-4668-8d78-3b357542dad4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.418830 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f57a689-3b37-4c87-a02f-7898dbbaa665-metrics-tls\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.421281 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.421372 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/630eae9a-c1b8-47ce-873a-3ef59ef6c002-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.422135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8486a6bf-b477-46be-9841-94481ef84313-etcd-client\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.422186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.422408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-etcd-ca\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.427725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.430817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/52fd7f11-0ca1-4af5-98a0-00789fb541e6-proxy-tls\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.451391 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f98b4\" (UniqueName: \"kubernetes.io/projected/f574ab44-d876-47fc-b23e-a46666fdaf9e-kube-api-access-f98b4\") pod \"dns-operator-744455d44c-4ztkj\" (UID: \"f574ab44-d876-47fc-b23e-a46666fdaf9e\") " pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.452410 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vp8tq"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.455873 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2n672\" (UID: \"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.476497 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdfwb\" (UniqueName: \"kubernetes.io/projected/def7efcb-32f5-4a8b-9be9-9fc39456c534-kube-api-access-mdfwb\") pod \"machine-config-controller-84d6567774-8khmt\" (UID: \"def7efcb-32f5-4a8b-9be9-9fc39456c534\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.485223 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490634 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvlp\" (UniqueName: \"kubernetes.io/projected/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-kube-api-access-vwvlp\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-key\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490701 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppbz\" (UniqueName: \"kubernetes.io/projected/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-kube-api-access-rppbz\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490729 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490759 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vss24\" (UniqueName: \"kubernetes.io/projected/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-kube-api-access-vss24\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490787 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-certs\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490815 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-csi-data-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490839 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxg7\" (UniqueName: \"kubernetes.io/projected/5cdfd3a3-6548-4657-a810-55f8eaac886b-kube-api-access-txxg7\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490864 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-srv-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490895 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-serving-cert\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-config\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.490978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7889h\" (UniqueName: \"kubernetes.io/projected/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-kube-api-access-7889h\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491005 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-registration-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc 
kubenswrapper[4720]: I0122 06:37:27.491050 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-srv-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-node-bootstrap-token\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491098 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhq6v\" (UniqueName: \"kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491119 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qfn8\" (UniqueName: \"kubernetes.io/projected/ef604d7d-576b-48eb-8131-888627c5c681-kube-api-access-8qfn8\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: 
\"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491183 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491210 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-plugins-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491230 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-config-volume\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-mountpoint-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-socket-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: 
\"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-csi-data-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2p5t\" (UniqueName: \"kubernetes.io/projected/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-kube-api-access-q2p5t\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdfd3a3-6548-4657-a810-55f8eaac886b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491451 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef604d7d-576b-48eb-8131-888627c5c681-cert\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-metrics-tls\") pod 
\"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491517 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ngqp\" (UniqueName: \"kubernetes.io/projected/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-kube-api-access-7ngqp\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491587 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khxjq\" (UniqueName: \"kubernetes.io/projected/bfc7dce3-7c14-4844-b363-d7f9422769cd-kube-api-access-khxjq\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491610 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491646 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzhz2\" (UniqueName: \"kubernetes.io/projected/2a217772-16ab-414b-b3b6-3758c65a8c58-kube-api-access-qzhz2\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-cabundle\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491774 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc 
kubenswrapper[4720]: I0122 06:37:27.491796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8499\" (UniqueName: \"kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491841 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-tmpfs\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.491866 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.492550 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:27.992529075 +0000 UTC m=+140.134435780 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.494047 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-config\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.495600 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-plugins-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.503369 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-serving-cert\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.503524 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.504201 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-certs\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.504337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-config-volume\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.504497 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-mountpoint-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.504586 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-socket-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.504815 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-srv-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505216 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-key\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505270 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a217772-16ab-414b-b3b6-3758c65a8c58-registration-dir\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505347 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/bfc7dce3-7c14-4844-b363-d7f9422769cd-srv-cert\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.505997 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-tmpfs\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.506405 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.506428 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-node-bootstrap-token\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.506988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-signing-cabundle\") pod \"service-ca-9c57cc56f-4b66q\" (UID: 
\"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.507360 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvjjc\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-kube-api-access-rvjjc\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.507864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ef604d7d-576b-48eb-8131-888627c5c681-cert\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.510137 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-apiservice-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.513882 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/5cdfd3a3-6548-4657-a810-55f8eaac886b-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.515396 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-webhook-cert\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.525666 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-metrics-tls\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.527738 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.532448 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.548388 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngd8\" (UniqueName: \"kubernetes.io/projected/f7103470-2ea6-46ac-ba17-32ea3ffb00ae-kube-api-access-pngd8\") pod \"etcd-operator-b45778765-tfvxx\" (UID: \"f7103470-2ea6-46ac-ba17-32ea3ffb00ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: W0122 06:37:27.551222 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7c9fba_71e2_44d4_9601_be0ffa541be4.slice/crio-a39126b9faad9e2b2fc2a69217c4e4799a1bf64d17de49ec063690b97535b1b4 WatchSource:0}: Error finding container 
a39126b9faad9e2b2fc2a69217c4e4799a1bf64d17de49ec063690b97535b1b4: Status 404 returned error can't find the container with id a39126b9faad9e2b2fc2a69217c4e4799a1bf64d17de49ec063690b97535b1b4 Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.554820 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhm5d\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.556434 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"] Jan 22 06:37:27 crc kubenswrapper[4720]: W0122 06:37:27.557028 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod508eaeea_db9b_4801_a9d3_a758e3ae9502.slice/crio-58568531637fc48f04358aa29bcfbcfda9fa1c2b3f8b3987421bb8d9943e45e6 WatchSource:0}: Error finding container 58568531637fc48f04358aa29bcfbcfda9fa1c2b3f8b3987421bb8d9943e45e6: Status 404 returned error can't find the container with id 58568531637fc48f04358aa29bcfbcfda9fa1c2b3f8b3987421bb8d9943e45e6 Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.572471 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98dc2\" (UniqueName: \"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-kube-api-access-98dc2\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.582703 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmhv\" (UniqueName: 
\"kubernetes.io/projected/8486a6bf-b477-46be-9841-94481ef84313-kube-api-access-thmhv\") pod \"apiserver-76f77b778f-dsfv4\" (UID: \"8486a6bf-b477-46be-9841-94481ef84313\") " pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.595243 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.595891 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.095871209 +0000 UTC m=+140.237777914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.611601 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.615817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/655a100f-fb0a-4668-8d78-3b357542dad4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-t25tr\" (UID: \"655a100f-fb0a-4668-8d78-3b357542dad4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.626577 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.636581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzrl\" (UniqueName: \"kubernetes.io/projected/a8593368-7930-499d-aa21-6526251ce66c-kube-api-access-wmzrl\") pod \"multus-admission-controller-857f4d67dd-xf5cz\" (UID: \"a8593368-7930-499d-aa21-6526251ce66c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.637229 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.642439 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rssgf\" (UniqueName: \"kubernetes.io/projected/baa1be6a-a3ce-4a10-9038-8e2cc8e7079c-kube-api-access-rssgf\") pod \"migrator-59844c95c7-rlg6q\" (UID: \"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.676703 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98zgs\" (UniqueName: \"kubernetes.io/projected/630eae9a-c1b8-47ce-873a-3ef59ef6c002-kube-api-access-98zgs\") pod \"openshift-controller-manager-operator-756b6f6bc6-bj86g\" (UID: \"630eae9a-c1b8-47ce-873a-3ef59ef6c002\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.703675 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.703829 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9bq9\" (UniqueName: \"kubernetes.io/projected/29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9-kube-api-access-j9bq9\") pod \"kube-storage-version-migrator-operator-b67b599dd-x9zg2\" (UID: \"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 
06:37:27.704202 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.204184163 +0000 UTC m=+140.346090868 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.708084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d496aa-81c7-47cf-9966-00c96cecc997-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-g9f7l\" (UID: \"b0d496aa-81c7-47cf-9966-00c96cecc997\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.720801 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.734865 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxdwr"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.737109 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4qh\" (UniqueName: 
\"kubernetes.io/projected/52fd7f11-0ca1-4af5-98a0-00789fb541e6-kube-api-access-dv4qh\") pod \"machine-config-operator-74547568cd-qph2m\" (UID: \"52fd7f11-0ca1-4af5-98a0-00789fb541e6\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.754005 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.759517 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f57a689-3b37-4c87-a02f-7898dbbaa665-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nn9tr\" (UID: \"9f57a689-3b37-4c87-a02f-7898dbbaa665\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.771279 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.790603 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mmwx\" (UniqueName: \"kubernetes.io/projected/1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756-kube-api-access-4mmwx\") pod \"openshift-config-operator-7777fb866f-r7h6p\" (UID: \"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.809350 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.809566 4720 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.309512302 +0000 UTC m=+140.451419007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.809776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.810220 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.310205692 +0000 UTC m=+140.452112387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.814741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.818756 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvlp\" (UniqueName: \"kubernetes.io/projected/a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7-kube-api-access-vwvlp\") pod \"service-ca-operator-777779d784-tk7sp\" (UID: \"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.831973 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.837036 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppbz\" (UniqueName: \"kubernetes.io/projected/06a737ed-a93e-407f-a8c9-4f096bc8d7dd-kube-api-access-rppbz\") pod \"dns-default-t8k8z\" (UID: \"06a737ed-a93e-407f-a8c9-4f096bc8d7dd\") " pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.839142 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.852513 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.871018 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vss24\" (UniqueName: \"kubernetes.io/projected/5c4c11aa-147f-4cd0-8beb-05f19b0c690d-kube-api-access-vss24\") pod \"olm-operator-6b444d44fb-vmvqs\" (UID: \"5c4c11aa-147f-4cd0-8beb-05f19b0c690d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.871237 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.879118 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2p5t\" (UniqueName: \"kubernetes.io/projected/da5e3c21-a4d3-4a75-8375-4cd909ee8a05-kube-api-access-q2p5t\") pod \"machine-config-server-tdfb6\" (UID: \"da5e3c21-a4d3-4a75-8375-4cd909ee8a05\") " pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.896010 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxg7\" (UniqueName: \"kubernetes.io/projected/5cdfd3a3-6548-4657-a810-55f8eaac886b-kube-api-access-txxg7\") pod \"package-server-manager-789f6589d5-24gcw\" (UID: \"5cdfd3a3-6548-4657-a810-55f8eaac886b\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.908663 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.910346 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:27 crc kubenswrapper[4720]: E0122 06:37:27.910809 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.410787537 +0000 UTC m=+140.552694242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.914835 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qfn8\" (UniqueName: \"kubernetes.io/projected/ef604d7d-576b-48eb-8131-888627c5c681-kube-api-access-8qfn8\") pod \"ingress-canary-rg9qd\" (UID: \"ef604d7d-576b-48eb-8131-888627c5c681\") " pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.915008 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.930277 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.945762 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7889h\" (UniqueName: \"kubernetes.io/projected/636ee97b-f5c5-4079-bb13-35d75fa7ffa9-kube-api-access-7889h\") pod \"service-ca-9c57cc56f-4b66q\" (UID: \"636ee97b-f5c5-4079-bb13-35d75fa7ffa9\") " pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.945976 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.949969 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4ztkj"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.953948 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.966400 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.971558 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8499\" (UniqueName: \"kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499\") pod \"marketplace-operator-79b997595-nhzl2\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.974051 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhq6v\" (UniqueName: \"kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v\") pod \"collect-profiles-29484390-gc7lm\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.989631 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.990676 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.993440 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672"] Jan 22 06:37:27 crc kubenswrapper[4720]: I0122 06:37:27.994590 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzhz2\" (UniqueName: \"kubernetes.io/projected/2a217772-16ab-414b-b3b6-3758c65a8c58-kube-api-access-qzhz2\") pod \"csi-hostpathplugin-9n8jj\" (UID: \"2a217772-16ab-414b-b3b6-3758c65a8c58\") " pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.001671 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:28 crc kubenswrapper[4720]: W0122 06:37:28.009165 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf574ab44_d876_47fc_b23e_a46666fdaf9e.slice/crio-1c74ebe70c3421a1fbee45683d47f4df0e93afbb87b5475de49f170b490504ed WatchSource:0}: Error finding container 1c74ebe70c3421a1fbee45683d47f4df0e93afbb87b5475de49f170b490504ed: Status 404 returned error can't find the container with id 1c74ebe70c3421a1fbee45683d47f4df0e93afbb87b5475de49f170b490504ed Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.011729 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.011994 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.012459 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.512442162 +0000 UTC m=+140.654348867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: W0122 06:37:28.017875 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef7efcb_32f5_4a8b_9be9_9fc39456c534.slice/crio-a1065e4e0f6ec3f02269eb9652594e035e3d40142750f4df6a34b7a30787cb22 WatchSource:0}: Error finding container a1065e4e0f6ec3f02269eb9652594e035e3d40142750f4df6a34b7a30787cb22: Status 404 returned error can't find the container with id a1065e4e0f6ec3f02269eb9652594e035e3d40142750f4df6a34b7a30787cb22 Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.019183 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-khxjq\" (UniqueName: \"kubernetes.io/projected/bfc7dce3-7c14-4844-b363-d7f9422769cd-kube-api-access-khxjq\") pod \"catalog-operator-68c6474976-n9zff\" (UID: \"bfc7dce3-7c14-4844-b363-d7f9422769cd\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.024894 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.029548 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.037959 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.038797 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ngqp\" (UniqueName: \"kubernetes.io/projected/84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0-kube-api-access-7ngqp\") pod \"packageserver-d55dfcdfc-x4kkz\" (UID: \"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.070049 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.080528 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-rg9qd" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.090308 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.095105 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" event={"ID":"4b9e32c5-534c-42ed-96fd-4e747d7084dd","Type":"ContainerStarted","Data":"5807cadc8ea58a53375416cf3cd8fdc9532e34ef3b8b68c2f691f218705b7759"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.095162 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" event={"ID":"4b9e32c5-534c-42ed-96fd-4e747d7084dd","Type":"ContainerStarted","Data":"f3b41213da8b1bc179a5a722a4a44574aefae32cf16e9b87bf57e39359443b73"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.099893 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tdfb6" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.109484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6j6b9" event={"ID":"12b3f8d7-d79f-48e6-be2f-eeb97827e913","Type":"ContainerStarted","Data":"c3af65fe296215092a99670f657aff6e12fdcd81966935473698a3dc02505420"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.109533 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-6j6b9" event={"ID":"12b3f8d7-d79f-48e6-be2f-eeb97827e913","Type":"ContainerStarted","Data":"500acb39b62c05621fb52a9efa4d317b90e48b5a09a414f243ff6fd20b2ce7b1"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.112841 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.113377 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hrglt" event={"ID":"610296d1-12dc-4132-8ef9-9cc37ed81a3d","Type":"ContainerStarted","Data":"2c4b34716070cd65ae968e3dbd4e7cf178c226c2ba5a1dbecaff29cb8cd54dd7"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.113442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-hrglt" event={"ID":"610296d1-12dc-4132-8ef9-9cc37ed81a3d","Type":"ContainerStarted","Data":"b109b11b37234dcc36cd52de3b65f98170e6ed336ba5dc2fca12d0bc782983a1"} Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.113397 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.613377317 +0000 UTC m=+140.755284022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.113948 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.115548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" event={"ID":"824d4c6b-8052-429c-a050-4339913991b5","Type":"ContainerStarted","Data":"294b87f5aaffa6f2c19b7bdf7910a5f5d4790a67a2816326b48b7d27a9554c7b"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.170078 4720 patch_prober.go:28] interesting pod/console-operator-58897d9998-hrglt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.170210 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-hrglt" podUID="610296d1-12dc-4132-8ef9-9cc37ed81a3d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.177888 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" 
event={"ID":"508eaeea-db9b-4801-a9d3-a758e3ae9502","Type":"ContainerStarted","Data":"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.177949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" event={"ID":"508eaeea-db9b-4801-a9d3-a758e3ae9502","Type":"ContainerStarted","Data":"58568531637fc48f04358aa29bcfbcfda9fa1c2b3f8b3987421bb8d9943e45e6"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.178930 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.182708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" event={"ID":"def7efcb-32f5-4a8b-9be9-9fc39456c534","Type":"ContainerStarted","Data":"a1065e4e0f6ec3f02269eb9652594e035e3d40142750f4df6a34b7a30787cb22"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.192897 4720 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gxkzq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.193033 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.196358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" event={"ID":"f574ab44-d876-47fc-b23e-a46666fdaf9e","Type":"ContainerStarted","Data":"1c74ebe70c3421a1fbee45683d47f4df0e93afbb87b5475de49f170b490504ed"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.249902 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.253306 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.753262647 +0000 UTC m=+140.895169352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.262640 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268275 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268338 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" event={"ID":"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310","Type":"ContainerStarted","Data":"cd3e365141652637c97b8ed7272b4367d176c39b7b513f9eccd1ff5d9cc4014a"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" event={"ID":"7b9dafa1-4a65-48a2-bf74-5bfcea6aa310","Type":"ContainerStarted","Data":"5b2969786070d4af44cc5d98d58b6c666733d7ebcf96484d57da96e03e11e55d"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268403 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zv6lm" event={"ID":"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204","Type":"ContainerStarted","Data":"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268421 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zv6lm" event={"ID":"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204","Type":"ContainerStarted","Data":"c5fd865239210e1750f480bcfb9d45e08e6a72b727a98c562dfe6f9cca9746a9"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268437 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" event={"ID":"42322892-7874-4c59-ab1a-e3f205212e2e","Type":"ContainerStarted","Data":"8e5b9397d346ba4253a205df7c634d8956eadfa9c41d61a94a7976f05cd4f7a4"} Jan 22 06:37:28 crc 
kubenswrapper[4720]: I0122 06:37:28.268452 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" event={"ID":"42322892-7874-4c59-ab1a-e3f205212e2e","Type":"ContainerStarted","Data":"29bbcd341b0e9bc2d1926bfc668def3bc57e45973e441b3bafafd6d0ea66ae9a"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" event={"ID":"3f7c9fba-71e2-44d4-9601-be0ffa541be4","Type":"ContainerStarted","Data":"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.268725 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" event={"ID":"3f7c9fba-71e2-44d4-9601-be0ffa541be4","Type":"ContainerStarted","Data":"a39126b9faad9e2b2fc2a69217c4e4799a1bf64d17de49ec063690b97535b1b4"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.271688 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.274830 4720 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dhklt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.274899 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.279369 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ws6w8" event={"ID":"dc1c1a54-81dc-4e91-80db-606befa6c477","Type":"ContainerStarted","Data":"33cc3885af3c92be2f75e7d6a32ac0781825a58f8ea7da432088999c419417b9"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.279448 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.279473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ws6w8" event={"ID":"dc1c1a54-81dc-4e91-80db-606befa6c477","Type":"ContainerStarted","Data":"bb3593e45cb168ff1ea4b8c50b85e1da1aea5bab78361b6ad94c547073d7230a"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.288552 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-ws6w8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" 
start-of-body= Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.289082 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.288651 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ws6w8" podUID="dc1c1a54-81dc-4e91-80db-606befa6c477" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.321966 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-dsfv4"] Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.322969 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:28 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:28 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:28 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.323018 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.326366 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" event={"ID":"0a21ae7b-9111-4c9f-a378-f2acdb19931a","Type":"ContainerStarted","Data":"85b3392bd6d1b940f7e5952dc94140d5443b7fe0c090bf6d2d872637d20fc59a"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.326445 4720 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" event={"ID":"0a21ae7b-9111-4c9f-a378-f2acdb19931a","Type":"ContainerStarted","Data":"f8a1bbd1ba9b0747f3229ccaaee8b19a5c73e67d23ce46e23f36b0f7f4695acb"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.326985 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.327956 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" event={"ID":"87a73166-b8c6-4dab-bd45-46b640a4b1c5","Type":"ContainerStarted","Data":"b91b825719b1782f45cd83807a8177460a0bf9cedc272ee6350c61843ee9ed6e"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.337709 4720 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-vp8tq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.337831 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.344692 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" event={"ID":"b768bae9-692e-4039-8fea-d88359e16ee4","Type":"ContainerStarted","Data":"cad312afcb24c19743f9a2f15af8edcd7a66bf2e05fb9f58d5c126980549905c"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.346565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" event={"ID":"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24","Type":"ContainerStarted","Data":"57e5548ea104542ef0a916e1bbf8d4a7ec038ddc85d581f1dfba245d3b67c22a"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.362649 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.364078 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.864048111 +0000 UTC m=+141.005954866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.366449 4720 generic.go:334] "Generic (PLEG): container finished" podID="1088f6d1-1bac-4e7c-a944-2e9b5d259413" containerID="11026900dd71c688be6a9b7ee7b83da910d9d626ccb820e728f4ab556304a3a6" exitCode=0 Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.366575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" event={"ID":"1088f6d1-1bac-4e7c-a944-2e9b5d259413","Type":"ContainerDied","Data":"11026900dd71c688be6a9b7ee7b83da910d9d626ccb820e728f4ab556304a3a6"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.366625 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" event={"ID":"1088f6d1-1bac-4e7c-a944-2e9b5d259413","Type":"ContainerStarted","Data":"c2f349715ca8426e0f537c3cf7be611b7c70b0c6af51833b66902667b8b920bf"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.404944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" event={"ID":"e41cd5a0-a754-4161-938a-463f2673d37e","Type":"ContainerStarted","Data":"871bb6e8b62c71d5223f051ebfc9d7f3fc8402654e939b27e67d53d025ad0edd"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.405978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" 
event={"ID":"e41cd5a0-a754-4161-938a-463f2673d37e","Type":"ContainerStarted","Data":"d68f58de1df825913399b434c3821b3e813e325e591ec64c90dd2a483687fbc4"} Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.464463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.467657 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:28.96763595 +0000 UTC m=+141.109542655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: W0122 06:37:28.477901 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5e3c21_a4d3_4a75_8375_4cd909ee8a05.slice/crio-b2f3f7b7922ae6189caab1d4dbd77acf637159a66ac68d24b859e47218e58f10 WatchSource:0}: Error finding container b2f3f7b7922ae6189caab1d4dbd77acf637159a66ac68d24b859e47218e58f10: Status 404 returned error can't find the container with id b2f3f7b7922ae6189caab1d4dbd77acf637159a66ac68d24b859e47218e58f10 Jan 22 06:37:28 crc kubenswrapper[4720]: 
I0122 06:37:28.564136 4720 csr.go:261] certificate signing request csr-glmww is approved, waiting to be issued Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.566885 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.568192 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.068174684 +0000 UTC m=+141.210081389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.575847 4720 csr.go:257] certificate signing request csr-glmww is issued Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.629699 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr"] Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.670967 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.671362 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.171348492 +0000 UTC m=+141.313255197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.743114 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p"] Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.772609 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.772760 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.27273349 +0000 UTC m=+141.414640195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.773652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.773993 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.273985116 +0000 UTC m=+141.415891821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.779673 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr"] Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.878388 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.878893 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.378867582 +0000 UTC m=+141.520774287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: W0122 06:37:28.899149 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1919d4bf_d3e1_4648_bcd6_1e7b0f5a0756.slice/crio-a3d706fa6c72edfa4c1179ffdb353509140b1aafca9038b7f558a08d3eb75ea1 WatchSource:0}: Error finding container a3d706fa6c72edfa4c1179ffdb353509140b1aafca9038b7f558a08d3eb75ea1: Status 404 returned error can't find the container with id a3d706fa6c72edfa4c1179ffdb353509140b1aafca9038b7f558a08d3eb75ea1 Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.980771 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:28 crc kubenswrapper[4720]: E0122 06:37:28.981276 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.481260029 +0000 UTC m=+141.623166734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:28 crc kubenswrapper[4720]: I0122 06:37:28.996945 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-6j6b9" podStartSLOduration=115.996925753 podStartE2EDuration="1m55.996925753s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:28.992706493 +0000 UTC m=+141.134613198" watchObservedRunningTime="2026-01-22 06:37:28.996925753 +0000 UTC m=+141.138832458" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.082428 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.082989 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.582949325 +0000 UTC m=+141.724856020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.083780 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.084237 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.584227501 +0000 UTC m=+141.726134206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.113229 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ws6w8" podStartSLOduration=117.113202694 podStartE2EDuration="1m57.113202694s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.110743444 +0000 UTC m=+141.252650159" watchObservedRunningTime="2026-01-22 06:37:29.113202694 +0000 UTC m=+141.255109399" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.156926 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.162296 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" podStartSLOduration=117.162271456 podStartE2EDuration="1m57.162271456s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.151961784 +0000 UTC m=+141.293868499" watchObservedRunningTime="2026-01-22 06:37:29.162271456 +0000 UTC m=+141.304178161" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.166034 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-b45778765-tfvxx"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.184797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.185169 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.685151136 +0000 UTC m=+141.827057831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.202260 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" podStartSLOduration=116.202243051 podStartE2EDuration="1m56.202243051s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.20079692 +0000 UTC m=+141.342703635" watchObservedRunningTime="2026-01-22 06:37:29.202243051 +0000 UTC m=+141.344149756" Jan 22 06:37:29 crc kubenswrapper[4720]: W0122 06:37:29.208530 4720 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7103470_2ea6_46ac_ba17_32ea3ffb00ae.slice/crio-69bf33e6c1349faed83e8f5ffabdb7068c5524cefd6424901b7db5b55089beca WatchSource:0}: Error finding container 69bf33e6c1349faed83e8f5ffabdb7068c5524cefd6424901b7db5b55089beca: Status 404 returned error can't find the container with id 69bf33e6c1349faed83e8f5ffabdb7068c5524cefd6424901b7db5b55089beca Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.267342 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.287029 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.287485 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.787452759 +0000 UTC m=+141.929359464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.305791 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:29 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:29 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:29 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.305817 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-rg9qd"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.305867 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.317280 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-hrglt" podStartSLOduration=117.317265875 podStartE2EDuration="1m57.317265875s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.316055621 +0000 UTC m=+141.457962346" 
watchObservedRunningTime="2026-01-22 06:37:29.317265875 +0000 UTC m=+141.459172590" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.336879 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4b66q"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.343667 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.347655 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q"] Jan 22 06:37:29 crc kubenswrapper[4720]: W0122 06:37:29.370492 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef604d7d_576b_48eb_8131_888627c5c681.slice/crio-52ad5606f8eec6aff34cb9c7f8d397f830db57be3f055a579625ddc852d3dfed WatchSource:0}: Error finding container 52ad5606f8eec6aff34cb9c7f8d397f830db57be3f055a579625ddc852d3dfed: Status 404 returned error can't find the container with id 52ad5606f8eec6aff34cb9c7f8d397f830db57be3f055a579625ddc852d3dfed Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.388727 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.415599 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.915549355 +0000 UTC m=+142.057456060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.416592 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.418602 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:29.918570831 +0000 UTC m=+142.060477536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.450050 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" podStartSLOduration=117.450025233 podStartE2EDuration="1m57.450025233s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.444428985 +0000 UTC m=+141.586335690" watchObservedRunningTime="2026-01-22 06:37:29.450025233 +0000 UTC m=+141.591931938" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.482139 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-t8k8z"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.483985 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.486222 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zv6lm" podStartSLOduration=117.486208 podStartE2EDuration="1m57.486208s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.483506474 +0000 UTC m=+141.625413179" watchObservedRunningTime="2026-01-22 06:37:29.486208 +0000 UTC m=+141.628114705" Jan 22 06:37:29 crc 
kubenswrapper[4720]: I0122 06:37:29.506234 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" event={"ID":"f7103470-2ea6-46ac-ba17-32ea3ffb00ae","Type":"ContainerStarted","Data":"69bf33e6c1349faed83e8f5ffabdb7068c5524cefd6424901b7db5b55089beca"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.522672 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.523928 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.023615632 +0000 UTC m=+142.165522347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.526894 4720 generic.go:334] "Generic (PLEG): container finished" podID="8486a6bf-b477-46be-9841-94481ef84313" containerID="a42637983d88e82d6569c760f6706ef87951589bfd83ee53d7585a6df27a1244" exitCode=0 Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.527019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" event={"ID":"8486a6bf-b477-46be-9841-94481ef84313","Type":"ContainerDied","Data":"a42637983d88e82d6569c760f6706ef87951589bfd83ee53d7585a6df27a1244"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.527060 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" event={"ID":"8486a6bf-b477-46be-9841-94481ef84313","Type":"ContainerStarted","Data":"2fb2eab2774a762f02be11c4b69927ca3f3d8e1cab9502def487f68dcac6cfc9"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.527615 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.533214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" event={"ID":"def7efcb-32f5-4a8b-9be9-9fc39456c534","Type":"ContainerStarted","Data":"5e78b2ad136eb7dac83607519144082452bb611931df45d31ba511ca45a3cab8"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.534579 4720 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.536626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rg9qd" event={"ID":"ef604d7d-576b-48eb-8131-888627c5c681","Type":"ContainerStarted","Data":"52ad5606f8eec6aff34cb9c7f8d397f830db57be3f055a579625ddc852d3dfed"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.575600 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" event={"ID":"52fd7f11-0ca1-4af5-98a0-00789fb541e6","Type":"ContainerStarted","Data":"bc57c9dccd037167fc03ce886a9b9ee469e9b1ea9d58acd50f9f45b25cc19ef7"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.577542 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-22 06:32:28 +0000 UTC, rotation deadline is 2026-10-31 18:02:20.549250853 +0000 UTC Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.577602 4720 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6779h24m50.971650859s for next certificate rotation Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.597883 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" event={"ID":"b768bae9-692e-4039-8fea-d88359e16ee4","Type":"ContainerStarted","Data":"447fc918452b38a8bfd6be7ed2d435aadaee8b8609cc75b83a9e6d8f15fcaaa2"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.608674 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" event={"ID":"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756","Type":"ContainerStarted","Data":"ef191871c74d6da6ebdc2b83170f9cd99ac428b0fcb729af9e0dac90c280fac5"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.608753 4720 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" event={"ID":"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756","Type":"ContainerStarted","Data":"a3d706fa6c72edfa4c1179ffdb353509140b1aafca9038b7f558a08d3eb75ea1"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.615346 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" event={"ID":"1088f6d1-1bac-4e7c-a944-2e9b5d259413","Type":"ContainerStarted","Data":"183683b0fa3fae5789bd890db38635858837de0e2137c6cc2f003ab04eb33862"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.621004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" event={"ID":"824d4c6b-8052-429c-a050-4339913991b5","Type":"ContainerStarted","Data":"8922e73cbb89ada0f6ba58ef7fcb3081d2802a45912846e7fdb7eb0acf516d77"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.622022 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" event={"ID":"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c","Type":"ContainerStarted","Data":"8924752c37f4d425e13593f69b7cc5377dd2528bb54b98fdff0edf0a3a6d2caf"} Jan 22 06:37:29 crc kubenswrapper[4720]: W0122 06:37:29.624396 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0d496aa_81c7_47cf_9966_00c96cecc997.slice/crio-74cfe4c0ea05c8f94b00913049dc11b527f8c1211bff4c3ccf73c2a4b5576527 WatchSource:0}: Error finding container 74cfe4c0ea05c8f94b00913049dc11b527f8c1211bff4c3ccf73c2a4b5576527: Status 404 returned error can't find the container with id 74cfe4c0ea05c8f94b00913049dc11b527f8c1211bff4c3ccf73c2a4b5576527 Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.624775 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.630541 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.130524426 +0000 UTC m=+142.272431131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.642623 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" event={"ID":"4dae2f17-e6e0-4ad0-9eca-3d3adaac3c24","Type":"ContainerStarted","Data":"b29cb1655606bf7260588e2961fe4acadac554979da925e21b7a1e7e4abca169"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.657362 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gz8mf" podStartSLOduration=117.657324857 podStartE2EDuration="1m57.657324857s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.642951259 +0000 UTC 
m=+141.784857984" watchObservedRunningTime="2026-01-22 06:37:29.657324857 +0000 UTC m=+141.799231562" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.658812 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" event={"ID":"655a100f-fb0a-4668-8d78-3b357542dad4","Type":"ContainerStarted","Data":"f48de7a3614b4c7d1d363686fa428ead2dfe44c37a4be8c53444ab933f937d0c"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.658859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" event={"ID":"655a100f-fb0a-4668-8d78-3b357542dad4","Type":"ContainerStarted","Data":"16513b1ed07a185897e4a06abb94294bf4010e12196f46b96093f60876f0f694"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.663897 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bkx6t" podStartSLOduration=117.663884363 podStartE2EDuration="1m57.663884363s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.599065934 +0000 UTC m=+141.740972639" watchObservedRunningTime="2026-01-22 06:37:29.663884363 +0000 UTC m=+141.805791068" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.675253 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.679242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tdfb6" event={"ID":"da5e3c21-a4d3-4a75-8375-4cd909ee8a05","Type":"ContainerStarted","Data":"5dfc161af14520099ae2824fc2e0e9474a39c6564c08e7525b1b39a44528dc48"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 
06:37:29.679325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tdfb6" event={"ID":"da5e3c21-a4d3-4a75-8375-4cd909ee8a05","Type":"ContainerStarted","Data":"b2f3f7b7922ae6189caab1d4dbd77acf637159a66ac68d24b859e47218e58f10"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.692843 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" event={"ID":"636ee97b-f5c5-4079-bb13-35d75fa7ffa9","Type":"ContainerStarted","Data":"657447bd5fa85ca6de67ba0e8dc7a79919e65c585886b0157ee504bd30eccbc5"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.696600 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9n8jj"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.723220 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.730075 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.730284 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.230244807 +0000 UTC m=+142.372151512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.730734 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.731830 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.231816761 +0000 UTC m=+142.373723456 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.733485 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.753607 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" event={"ID":"87a73166-b8c6-4dab-bd45-46b640a4b1c5","Type":"ContainerStarted","Data":"6b44e19a8de10c05f60fb823f88f910a9f30304d0cf047a0e7bd24d336c17667"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.788722 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.801261 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.801320 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" event={"ID":"e65daf94-2073-4b05-8b99-f80d7f777d12","Type":"ContainerStarted","Data":"ad4611868f2299a9453be1cdf058a0a9993267e3971efe4696c6ca06e6a1d860"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.815955 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-xf5cz"] Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 
06:37:29.826816 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" event={"ID":"42322892-7874-4c59-ab1a-e3f205212e2e","Type":"ContainerStarted","Data":"b213ed30c1a452f0af0f84f3cc986a961c5ae98774841e27a25e28273ea8b17b"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.832843 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.835529 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.335502624 +0000 UTC m=+142.477409329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.852027 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gxfr8" podStartSLOduration=117.852000413 podStartE2EDuration="1m57.852000413s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.836105062 +0000 UTC m=+141.978011757" watchObservedRunningTime="2026-01-22 06:37:29.852000413 +0000 UTC m=+141.993907118" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.913258 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" event={"ID":"f574ab44-d876-47fc-b23e-a46666fdaf9e","Type":"ContainerStarted","Data":"ab106641076d355d64fd1b728d2a71b6135c5f3f1f36a18a1bf8366740eb0f80"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.918046 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2n672" podStartSLOduration=116.918026487 podStartE2EDuration="1m56.918026487s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.91462053 +0000 UTC m=+142.056527245" watchObservedRunningTime="2026-01-22 06:37:29.918026487 +0000 UTC m=+142.059933192" 
Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.937438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" event={"ID":"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9","Type":"ContainerStarted","Data":"caf09764ef7bebf88fc33e1dd223ec12aceee8385783c78918d4c85e194198b8"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.938018 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:29 crc kubenswrapper[4720]: E0122 06:37:29.938493 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.438469327 +0000 UTC m=+142.580376032 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.961415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" event={"ID":"9f57a689-3b37-4c87-a02f-7898dbbaa665","Type":"ContainerStarted","Data":"036f9ecdafbcd270b7c8cd0731432de146ffb5e1a91f33a2e905dee9ef02b3d1"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.961458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" event={"ID":"9f57a689-3b37-4c87-a02f-7898dbbaa665","Type":"ContainerStarted","Data":"bebde25aa26a63d3f68cba30cda3e728264bae8cf7f575b22becf53d5f71d2d4"} Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.967705 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-ws6w8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.967784 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ws6w8" podUID="dc1c1a54-81dc-4e91-80db-606befa6c477" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.982896 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7" podStartSLOduration=116.982875647 podStartE2EDuration="1m56.982875647s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:29.967296765 +0000 UTC m=+142.109203470" watchObservedRunningTime="2026-01-22 06:37:29.982875647 +0000 UTC m=+142.124782352" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.985372 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:37:29 crc kubenswrapper[4720]: I0122 06:37:29.992208 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.032864 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxdwr" podStartSLOduration=117.032820755 podStartE2EDuration="1m57.032820755s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.014856965 +0000 UTC m=+142.156763670" watchObservedRunningTime="2026-01-22 06:37:30.032820755 +0000 UTC m=+142.174727460" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.046678 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.047684 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.547663116 +0000 UTC m=+142.689569821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.152043 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tdfb6" podStartSLOduration=6.152026798 podStartE2EDuration="6.152026798s" podCreationTimestamp="2026-01-22 06:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.046258976 +0000 UTC m=+142.188165691" watchObservedRunningTime="2026-01-22 06:37:30.152026798 +0000 UTC m=+142.293933503" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.156321 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.183246 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.683224804 +0000 UTC m=+142.825131509 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.205610 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-twh47" podStartSLOduration=117.205587678 podStartE2EDuration="1m57.205587678s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.202528572 +0000 UTC m=+142.344435277" watchObservedRunningTime="2026-01-22 06:37:30.205587678 +0000 UTC m=+142.347494383" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.236555 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-t25tr" podStartSLOduration=118.236536477 podStartE2EDuration="1m58.236536477s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.236408903 +0000 UTC m=+142.378315638" watchObservedRunningTime="2026-01-22 06:37:30.236536477 +0000 UTC m=+142.378443182" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.261368 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.261815 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.761782023 +0000 UTC m=+142.903688728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.278086 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zmhj8" podStartSLOduration=117.278059035 podStartE2EDuration="1m57.278059035s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.268410481 +0000 UTC m=+142.410317186" watchObservedRunningTime="2026-01-22 06:37:30.278059035 +0000 UTC m=+142.419965740" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.301955 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http 
failed: reason withheld Jan 22 06:37:30 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:30 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:30 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.302009 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.366902 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.370581 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.870562801 +0000 UTC m=+143.012469506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.451215 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" podStartSLOduration=117.451189859 podStartE2EDuration="1m57.451189859s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:30.397852295 +0000 UTC m=+142.539759000" watchObservedRunningTime="2026-01-22 06:37:30.451189859 +0000 UTC m=+142.593096564" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.477983 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.478132 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.978110033 +0000 UTC m=+143.120016738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.478970 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.479448 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:30.9794111 +0000 UTC m=+143.121317805 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.589755 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.590271 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.090237616 +0000 UTC m=+143.232144321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.602581 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.602635 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-hrglt" Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.695776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.696587 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.196570304 +0000 UTC m=+143.338477009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.798441 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.799062 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.299043802 +0000 UTC m=+143.440950507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:30 crc kubenswrapper[4720]: I0122 06:37:30.912582 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:30 crc kubenswrapper[4720]: E0122 06:37:30.913017 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.413001697 +0000 UTC m=+143.554908402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.013643 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.013889 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.51387036 +0000 UTC m=+143.655777055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.013997 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.014723 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.514421235 +0000 UTC m=+143.656327930 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.028777 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" event={"ID":"bfc7dce3-7c14-4844-b363-d7f9422769cd","Type":"ContainerStarted","Data":"622a54dde0a9abae387a3c772f71b406f6265a81a35edf49aaffba08020a940a"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.028845 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" event={"ID":"bfc7dce3-7c14-4844-b363-d7f9422769cd","Type":"ContainerStarted","Data":"08a9c59da967eb4dd91f123a7435abe48492db5eb73757e32c472041315034f4"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.030356 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.036383 4720 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n9zff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.036449 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" podUID="bfc7dce3-7c14-4844-b363-d7f9422769cd" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.044165 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" event={"ID":"5cdfd3a3-6548-4657-a810-55f8eaac886b","Type":"ContainerStarted","Data":"e4fa64e7f9005fb2554acc2d2131a8e8d235317553e7f7fca86de1bdd1afb961"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.074701 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" event={"ID":"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c","Type":"ContainerStarted","Data":"531f5eaa1adc7081ad59d5b2b6274a6ddcbdf5ff5efc9a50f9754f8fd1bbc561"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.116296 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.116834 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.616814231 +0000 UTC m=+143.758720936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.117205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.118179 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.61817041 +0000 UTC m=+143.760077105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.120403 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" event={"ID":"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0","Type":"ContainerStarted","Data":"4e97da69028e8c2934d66078d736f2005aea2c5d6d604fd02ebc16ef58ed6a9f"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.120477 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" event={"ID":"84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0","Type":"ContainerStarted","Data":"550dbe64ab6acac428a9b84f7f4cc61100c3f0d3b3e3351c9191f175f684fb51"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.121774 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.129765 4720 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x4kkz container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.129831 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" podUID="84f9138c-fc5c-4cd9-83c7-2c6cc2d9b3a0" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.141359 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-rg9qd" event={"ID":"ef604d7d-576b-48eb-8131-888627c5c681","Type":"ContainerStarted","Data":"ae6eb4e27ae3e4a71c95443852e20452eb36d3fcb4281771567459718986bb7a"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.184124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" event={"ID":"2a217772-16ab-414b-b3b6-3758c65a8c58","Type":"ContainerStarted","Data":"51b240151b4afcfca6f89933f98173d2f78d784fe47f37f6edc78d1f3b47d695"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.218146 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.219072 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.719055093 +0000 UTC m=+143.860961788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.225096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" event={"ID":"8486a6bf-b477-46be-9841-94481ef84313","Type":"ContainerStarted","Data":"5e456322c14d0eab95e0a20c9174cc5e654fcb8f277f4a5c7ced18b715c52f36"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.269059 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff" podStartSLOduration=118.269031902 podStartE2EDuration="1m58.269031902s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.245619927 +0000 UTC m=+143.387526632" watchObservedRunningTime="2026-01-22 06:37:31.269031902 +0000 UTC m=+143.410938617" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.279021 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz" podStartSLOduration=118.278995805 podStartE2EDuration="1m58.278995805s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.276185115 +0000 UTC m=+143.418091820" watchObservedRunningTime="2026-01-22 06:37:31.278995805 +0000 UTC m=+143.420902500" Jan 22 06:37:31 crc 
kubenswrapper[4720]: I0122 06:37:31.279739 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" event={"ID":"b0d496aa-81c7-47cf-9966-00c96cecc997","Type":"ContainerStarted","Data":"74cfe4c0ea05c8f94b00913049dc11b527f8c1211bff4c3ccf73c2a4b5576527"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.299122 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:31 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:31 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:31 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.299188 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.300027 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" event={"ID":"636ee97b-f5c5-4079-bb13-35d75fa7ffa9","Type":"ContainerStarted","Data":"42ab659d7c7f951fc90b36e91ec5db505eaf5c718e6c15b3d02981e4f37b2df9"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.319060 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:31 crc kubenswrapper[4720]: 
E0122 06:37:31.320832 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.820817742 +0000 UTC m=+143.962724447 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.333801 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" event={"ID":"824d4c6b-8052-429c-a050-4339913991b5","Type":"ContainerStarted","Data":"bf7b16d0ae4497fd201fd6ffab2332c419fe551fa99cb16ed95914e1117045f3"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.340509 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" event={"ID":"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7","Type":"ContainerStarted","Data":"95d070024818bb243dabaf8318c7d78aa0dd8a1abc35f78040440e625d0e549c"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.360660 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-rg9qd" podStartSLOduration=7.360642472 podStartE2EDuration="7.360642472s" podCreationTimestamp="2026-01-22 06:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.316938062 +0000 UTC m=+143.458844767" watchObservedRunningTime="2026-01-22 
06:37:31.360642472 +0000 UTC m=+143.502549177" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.364149 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" podStartSLOduration=118.364139521 podStartE2EDuration="1m58.364139521s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.358754748 +0000 UTC m=+143.500661453" watchObservedRunningTime="2026-01-22 06:37:31.364139521 +0000 UTC m=+143.506046226" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.384071 4720 generic.go:334] "Generic (PLEG): container finished" podID="1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756" containerID="ef191871c74d6da6ebdc2b83170f9cd99ac428b0fcb729af9e0dac90c280fac5" exitCode=0 Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.384545 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" event={"ID":"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756","Type":"ContainerDied","Data":"ef191871c74d6da6ebdc2b83170f9cd99ac428b0fcb729af9e0dac90c280fac5"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.384612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" event={"ID":"1919d4bf-d3e1-4648-bcd6-1e7b0f5a0756","Type":"ContainerStarted","Data":"a9fb1bcd4823597bdc0130f5a3d05e6e417062b5dd9be07bbef7b372e03837bb"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.384720 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.388089 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-service-ca/service-ca-9c57cc56f-4b66q" podStartSLOduration=118.388075171 podStartE2EDuration="1m58.388075171s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.38629349 +0000 UTC m=+143.528200205" watchObservedRunningTime="2026-01-22 06:37:31.388075171 +0000 UTC m=+143.529981886" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.412240 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" event={"ID":"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c","Type":"ContainerStarted","Data":"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.412291 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" event={"ID":"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c","Type":"ContainerStarted","Data":"6591c95f3e3f2e5948260f1e7be83c08b2e294a0ff894541e742808920565c4a"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.413337 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.419112 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nhzl2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.419173 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.424368 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.424545 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.924509995 +0000 UTC m=+144.066416710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.425163 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.427065 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:31.927053957 +0000 UTC m=+144.068960662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.444335 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" podStartSLOduration=119.444317057 podStartE2EDuration="1m59.444317057s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.443401741 +0000 UTC m=+143.585308446" watchObservedRunningTime="2026-01-22 06:37:31.444317057 +0000 UTC m=+143.586223762" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.447521 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" event={"ID":"e65daf94-2073-4b05-8b99-f80d7f777d12","Type":"ContainerStarted","Data":"22439d6ab6366b7b10e629bb151a0d740839990657e42c6e5e7d0508c60a1d7d"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.451013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" event={"ID":"f7103470-2ea6-46ac-ba17-32ea3ffb00ae","Type":"ContainerStarted","Data":"0cd7396c08f6a980ecc66a820048fbbae428daaf2acd44bf75d775052b8992cd"} Jan 22 06:37:31 
crc kubenswrapper[4720]: I0122 06:37:31.474439 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" podStartSLOduration=118.474418151 podStartE2EDuration="1m58.474418151s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.472531598 +0000 UTC m=+143.614438303" watchObservedRunningTime="2026-01-22 06:37:31.474418151 +0000 UTC m=+143.616324856" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.510710 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" event={"ID":"def7efcb-32f5-4a8b-9be9-9fc39456c534","Type":"ContainerStarted","Data":"d4327c526ffd1d08670cb425d09a9197dca8da04ad2f3af21156cfb52aad3b24"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.534467 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.534815 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.034794465 +0000 UTC m=+144.176701170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.549662 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8k8z" event={"ID":"06a737ed-a93e-407f-a8c9-4f096bc8d7dd","Type":"ContainerStarted","Data":"e9297b1539ed0190dd602ef303f8db7df00259e154cd6c7da4920ead25606950"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.569683 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-x6khz" podStartSLOduration=119.569662165 podStartE2EDuration="1m59.569662165s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.533781956 +0000 UTC m=+143.675688671" watchObservedRunningTime="2026-01-22 06:37:31.569662165 +0000 UTC m=+143.711568870" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.573567 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" podStartSLOduration=119.573544515 podStartE2EDuration="1m59.573544515s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.570154589 +0000 UTC m=+143.712061294" watchObservedRunningTime="2026-01-22 06:37:31.573544515 +0000 UTC m=+143.715451220" Jan 22 06:37:31 crc 
kubenswrapper[4720]: I0122 06:37:31.596981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" event={"ID":"52fd7f11-0ca1-4af5-98a0-00789fb541e6","Type":"ContainerStarted","Data":"818e8b2b5ed8cf8341641f5f767f6a37d88a0f910c94f3b4781104c08d55e6b5"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.597035 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" event={"ID":"52fd7f11-0ca1-4af5-98a0-00789fb541e6","Type":"ContainerStarted","Data":"bd18646a6ac0004146b8c0041416ca6079b8b396050a704d98ea67355fccacc0"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.616159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" event={"ID":"5c4c11aa-147f-4cd0-8beb-05f19b0c690d","Type":"ContainerStarted","Data":"cdea2844a3881271f9b20340cb21dd8af297d9d7c9bdfe5160344a70492725f2"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.617138 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.624070 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tfvxx" podStartSLOduration=119.624053818 podStartE2EDuration="1m59.624053818s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.619460958 +0000 UTC m=+143.761367663" watchObservedRunningTime="2026-01-22 06:37:31.624053818 +0000 UTC m=+143.765960523" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.638922 4720 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vmvqs container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.639001 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" podUID="5c4c11aa-147f-4cd0-8beb-05f19b0c690d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.654375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.656369 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.156353925 +0000 UTC m=+144.298260630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.661543 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" podStartSLOduration=118.661520022 podStartE2EDuration="1m58.661520022s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.659575557 +0000 UTC m=+143.801482262" watchObservedRunningTime="2026-01-22 06:37:31.661520022 +0000 UTC m=+143.803426727" Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.670040 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" event={"ID":"9f57a689-3b37-4c87-a02f-7898dbbaa665","Type":"ContainerStarted","Data":"227647eb259fd8383803cd545e41d6bece686477f1ba9cb4104292066a854bed"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.704800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" event={"ID":"630eae9a-c1b8-47ce-873a-3ef59ef6c002","Type":"ContainerStarted","Data":"83a8a54c3143340a039b72e8ff11d6b4cdafacf649ef886c4226e9b6021fd9ac"} Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.704856 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" 
event={"ID":"630eae9a-c1b8-47ce-873a-3ef59ef6c002","Type":"ContainerStarted","Data":"7c293f941d419dfcd1f75f694dbce341efed226f1270dc4e2e6b4322c62b42ac"}
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.724918 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8khmt" podStartSLOduration=118.72488513 podStartE2EDuration="1m58.72488513s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.72418766 +0000 UTC m=+143.866094375" watchObservedRunningTime="2026-01-22 06:37:31.72488513 +0000 UTC m=+143.866791835"
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.749483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x9zg2" event={"ID":"29b6ba9b-6a4d-4e72-be18-2d9dbe9921a9","Type":"ContainerStarted","Data":"d0eef21e73589ffece05f5466db05ddf88c95271d26dcf7a85715a8261210768"}
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.757007 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.757624 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.257602839 +0000 UTC m=+144.399509544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.776191 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" podStartSLOduration=118.776174516 podStartE2EDuration="1m58.776174516s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.774402436 +0000 UTC m=+143.916309151" watchObservedRunningTime="2026-01-22 06:37:31.776174516 +0000 UTC m=+143.918081221"
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.820298 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" event={"ID":"a8593368-7930-499d-aa21-6526251ce66c","Type":"ContainerStarted","Data":"98d1f475722dc76c51494eee30dd8d2b3c381fa16dd5a9a8b168b6407db17e94"}
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.857048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" event={"ID":"f574ab44-d876-47fc-b23e-a46666fdaf9e","Type":"ContainerStarted","Data":"af516f8281869b008a43fa45fdd3936a665c3cfb043be47a2f940e9b2cd16628"}
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.861473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.893029 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.393005882 +0000 UTC m=+144.534912587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.959579 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nn9tr" podStartSLOduration=118.95956011 podStartE2EDuration="1m58.95956011s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.859058598 +0000 UTC m=+144.000965303" watchObservedRunningTime="2026-01-22 06:37:31.95956011 +0000 UTC m=+144.101466815"
Jan 22 06:37:31 crc kubenswrapper[4720]: I0122 06:37:31.962653 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:31 crc kubenswrapper[4720]: E0122 06:37:31.964359 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.464334635 +0000 UTC m=+144.606241340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.000729 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.000830 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.015756 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bj86g" podStartSLOduration=120.015736584 podStartE2EDuration="2m0.015736584s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:31.957415899 +0000 UTC m=+144.099322604" watchObservedRunningTime="2026-01-22 06:37:32.015736584 +0000 UTC m=+144.157643289"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.066584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.067412 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.56738991 +0000 UTC m=+144.709296615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.070098 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qph2m" podStartSLOduration=119.070082247 podStartE2EDuration="1m59.070082247s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:32.036235286 +0000 UTC m=+144.178141991" watchObservedRunningTime="2026-01-22 06:37:32.070082247 +0000 UTC m=+144.211988952"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.078921 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.123616 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4ztkj" podStartSLOduration=120.123595696 podStartE2EDuration="2m0.123595696s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:32.070807137 +0000 UTC m=+144.212713842" watchObservedRunningTime="2026-01-22 06:37:32.123595696 +0000 UTC m=+144.265502411"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.171041 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.172013 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.671988449 +0000 UTC m=+144.813895154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.274732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.275159 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.775140837 +0000 UTC m=+144.917047542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.299122 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 22 06:37:32 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld
Jan 22 06:37:32 crc kubenswrapper[4720]: [+]process-running ok
Jan 22 06:37:32 crc kubenswrapper[4720]: healthz check failed
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.299237 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.376231 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.376756 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.87672861 +0000 UTC m=+145.018635315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.478141 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.478701 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:32.978676434 +0000 UTC m=+145.120583139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.579881 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.580101 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.080062931 +0000 UTC m=+145.221969636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.580252 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.580658 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.080648198 +0000 UTC m=+145.222554903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.681416 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.681620 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.181579563 +0000 UTC m=+145.323486258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.681872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.682302 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.182292403 +0000 UTC m=+145.324199108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.783127 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.783373 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.283331791 +0000 UTC m=+145.425238496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.783604 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.784078 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.284043941 +0000 UTC m=+145.425950646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.866413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" event={"ID":"5cdfd3a3-6548-4657-a810-55f8eaac886b","Type":"ContainerStarted","Data":"3d5f12fca437a6f7c2230b72406dea7399ad878ff226e2ff2e563ededd4170c5"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.866464 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" event={"ID":"5cdfd3a3-6548-4657-a810-55f8eaac886b","Type":"ContainerStarted","Data":"2d58b4ad075146197b8b7dd1aafda41cd85ee00871c8475271e4ba2ac3eec275"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.866522 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.869566 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" event={"ID":"baa1be6a-a3ce-4a10-9038-8e2cc8e7079c","Type":"ContainerStarted","Data":"82bf30924bd8f46b8fe60c0f003cf3e43e800fe3a7825d4ed195ca9b21ff1c98"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.871253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" event={"ID":"2a217772-16ab-414b-b3b6-3758c65a8c58","Type":"ContainerStarted","Data":"ef1ff7e42ebfc2e28a01e5e8295a79299bd9289ce8045d79b889de5bedbc22eb"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.873168 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs" event={"ID":"5c4c11aa-147f-4cd0-8beb-05f19b0c690d","Type":"ContainerStarted","Data":"ebcb0caba8b6edb06a86d3ab946cd8a0f414db80ee6aed49c706a815b08d09b9"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.876127 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" event={"ID":"8486a6bf-b477-46be-9841-94481ef84313","Type":"ContainerStarted","Data":"ccc7fd9bba4d1a217a32080933b7e1aba38cb78513de50156d0ba33c4b8ddc4b"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.878323 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" event={"ID":"a8593368-7930-499d-aa21-6526251ce66c","Type":"ContainerStarted","Data":"73d83de46797ec2098d3b0df00d75571a6a64f3498e2037c8302f3f28cb85747"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.878387 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" event={"ID":"a8593368-7930-499d-aa21-6526251ce66c","Type":"ContainerStarted","Data":"fc5a73987b444edea615de7b4befb4314e8a5ec93531a5d83859c41a80830590"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.879968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-g9f7l" event={"ID":"b0d496aa-81c7-47cf-9966-00c96cecc997","Type":"ContainerStarted","Data":"059dc1edad784c0d312a823fc5a054960c874c5f14bfc2df542f5dd0af05b595"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.881609 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tk7sp" event={"ID":"a5ffab1e-2d22-407c-a1ea-ce4e8a74b4c7","Type":"ContainerStarted","Data":"a6468983030f6facb0d2d0b46a5ccd94e2c7ee3fcc330e242e043ba235733561"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.883417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8k8z" event={"ID":"06a737ed-a93e-407f-a8c9-4f096bc8d7dd","Type":"ContainerStarted","Data":"e48c64d06a37a93d5200abe993648f01618a29cef01f655f738f5dc591677a56"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.883472 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-t8k8z" event={"ID":"06a737ed-a93e-407f-a8c9-4f096bc8d7dd","Type":"ContainerStarted","Data":"5cff046e31950093456b4faae7d85da337eb357f7451913fc93cf75acd23a54c"}
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.884875 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.885288 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.385254794 +0000 UTC m=+145.527161499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.885697 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nhzl2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.885844 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.885975 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-t8k8z"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.895843 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9cvs7"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.896883 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9zff"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.978360 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" podStartSLOduration=119.978337615 podStartE2EDuration="1m59.978337615s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:32.919948268 +0000 UTC m=+145.061854983" watchObservedRunningTime="2026-01-22 06:37:32.978337615 +0000 UTC m=+145.120244320"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.981649 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlg6q" podStartSLOduration=119.981639779 podStartE2EDuration="1m59.981639779s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:32.973256171 +0000 UTC m=+145.115162876" watchObservedRunningTime="2026-01-22 06:37:32.981639779 +0000 UTC m=+145.123546484"
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.986629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:32 crc kubenswrapper[4720]: E0122 06:37:32.996208 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.496190792 +0000 UTC m=+145.638097497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:32 crc kubenswrapper[4720]: I0122 06:37:32.998539 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vmvqs"
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.073887 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-t8k8z" podStartSLOduration=9.073868387 podStartE2EDuration="9.073868387s" podCreationTimestamp="2026-01-22 06:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:33.015004786 +0000 UTC m=+145.156911491" watchObservedRunningTime="2026-01-22 06:37:33.073868387 +0000 UTC m=+145.215775092"
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.086790 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x4kkz"
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.088062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.088219 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.588195744 +0000 UTC m=+145.730102449 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.088399 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.089771 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.589755918 +0000 UTC m=+145.731662623 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.192713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.193821 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.693356118 +0000 UTC m=+145.835262823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.195290 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" podStartSLOduration=121.195275463 podStartE2EDuration="2m1.195275463s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:33.193647937 +0000 UTC m=+145.335554642" watchObservedRunningTime="2026-01-22 06:37:33.195275463 +0000 UTC m=+145.337182168"
Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.197660 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.198043 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.698034431 +0000 UTC m=+145.839941136 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.291993 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:33 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:33 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:33 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.292058 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.298662 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.299082 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-22 06:37:33.799066289 +0000 UTC m=+145.940972994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.358491 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-xf5cz" podStartSLOduration=120.358468715 podStartE2EDuration="2m0.358468715s" podCreationTimestamp="2026-01-22 06:35:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:33.299259694 +0000 UTC m=+145.441166399" watchObservedRunningTime="2026-01-22 06:37:33.358468715 +0000 UTC m=+145.500375410" Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.399886 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.400231 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:33.90021586 +0000 UTC m=+146.042122565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.501280 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.501660 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.001642568 +0000 UTC m=+146.143549273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.606023 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.606828 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.106811473 +0000 UTC m=+146.248718178 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.707195 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.707676 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.207655006 +0000 UTC m=+146.349561711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.809384 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.809750 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.309733373 +0000 UTC m=+146.451640078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.851109 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-r7h6p" Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.901968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" event={"ID":"2a217772-16ab-414b-b3b6-3758c65a8c58","Type":"ContainerStarted","Data":"67d9b2458cb31efbc07742b331294c994fde09855c3f9dacbd6c35a20cb8c818"} Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.903434 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nhzl2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.903495 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.910091 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.910394 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.410351049 +0000 UTC m=+146.552257754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:33 crc kubenswrapper[4720]: I0122 06:37:33.910694 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:33 crc kubenswrapper[4720]: E0122 06:37:33.911105 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.4110895 +0000 UTC m=+146.552996205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.012302 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.013198 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.513165157 +0000 UTC m=+146.655071862 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.013717 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.016267 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.516250174 +0000 UTC m=+146.658156879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.060090 4720 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.072410 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bvbhh"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.073488 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.095232 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.099856 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvbhh"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.114691 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.114779 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.114821 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.114886 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.614853713 +0000 UTC m=+146.756760408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.114953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.115160 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqxz\" (UniqueName: \"kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.115483 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.61546014 +0000 UTC m=+146.757366845 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.219237 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.219861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqxz\" (UniqueName: \"kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.220056 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.220147 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " 
pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.223659 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.72361787 +0000 UTC m=+146.865524575 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.225938 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.226240 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") pod \"community-operators-bvbhh\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.268811 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqxz\" (UniqueName: \"kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz\") pod \"community-operators-bvbhh\" (UID: 
\"557f2e7c-b408-456f-bfc8-b6714839b46a\") " pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.274627 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.275782 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.287588 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.291190 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.295632 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:34 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:34 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:34 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.295761 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.322337 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities\") pod \"certified-operators-dgfdc\" (UID: 
\"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.322533 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8tjc\" (UniqueName: \"kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.322601 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.322679 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.323093 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.823075753 +0000 UTC m=+146.964982448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-zcbc4" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.371300 4720 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-22T06:37:34.06012911Z","Handler":null,"Name":""} Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.388683 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.424009 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.424349 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.424397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8tjc\" (UniqueName: \"kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc\") 
pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.424424 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.424867 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: E0122 06:37:34.424967 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-22 06:37:34.924950684 +0000 UTC m=+147.066857389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.425540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.447092 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8tjc\" (UniqueName: \"kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc\") pod \"certified-operators-dgfdc\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.453376 4720 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.453421 4720 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.459231 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.460210 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.475583 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.532033 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.532120 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.532189 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwp27\" (UniqueName: \"kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.532211 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 
06:37:34.541997 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.542043 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.605166 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-zcbc4\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") " pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.605659 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636371 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636642 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwp27\" (UniqueName: 
\"kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636682 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.636725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.638084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.638102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.639670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.642579 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.643323 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.644203 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:34 crc 
kubenswrapper[4720]: I0122 06:37:34.647120 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.664650 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwp27\" (UniqueName: \"kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27\") pod \"community-operators-dlsd5\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.669273 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.670466 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.689405 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.706040 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.737206 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.737273 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.737332 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmhr\" (UniqueName: \"kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.740649 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.750669 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.754349 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bvbhh"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.775875 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.793590 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.844628 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.845064 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.845108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmhr\" (UniqueName: \"kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.845485 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.845578 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.865305 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmhr\" (UniqueName: \"kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr\") pod \"certified-operators-tv5kl\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.916094 4720 generic.go:334] "Generic (PLEG): container finished" podID="e65daf94-2073-4b05-8b99-f80d7f777d12" containerID="22439d6ab6366b7b10e629bb151a0d740839990657e42c6e5e7d0508c60a1d7d" exitCode=0 Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.916176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" event={"ID":"e65daf94-2073-4b05-8b99-f80d7f777d12","Type":"ContainerDied","Data":"22439d6ab6366b7b10e629bb151a0d740839990657e42c6e5e7d0508c60a1d7d"} Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.918121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerStarted","Data":"07ec8caf765924256883e9a75bdc3f59dd113b4d1d82e018c73b7751df7caa7b"} Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 
06:37:34.926518 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" event={"ID":"2a217772-16ab-414b-b3b6-3758c65a8c58","Type":"ContainerStarted","Data":"bd4638a61fdf2cfb27061d6684850544d8c34a24894e65221e021a1bb8bd9f5e"} Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.926575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" event={"ID":"2a217772-16ab-414b-b3b6-3758c65a8c58","Type":"ContainerStarted","Data":"8f9e5792965a4b53d238f715102caeb6709af884ac092f602a94c7e4d6162080"} Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.927511 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"] Jan 22 06:37:34 crc kubenswrapper[4720]: I0122 06:37:34.959959 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9n8jj" podStartSLOduration=10.959941289 podStartE2EDuration="10.959941289s" podCreationTimestamp="2026-01-22 06:37:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:34.959868047 +0000 UTC m=+147.101774742" watchObservedRunningTime="2026-01-22 06:37:34.959941289 +0000 UTC m=+147.101847984" Jan 22 06:37:35 crc kubenswrapper[4720]: W0122 06:37:35.019853 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67487e16_e2f8_441f_9fd2_41e1997d91df.slice/crio-ced1ca345ac7b9710a7df25da559a5802d45fafbbcdea07a2e1c1b3f65b83df5 WatchSource:0}: Error finding container ced1ca345ac7b9710a7df25da559a5802d45fafbbcdea07a2e1c1b3f65b83df5: Status 404 returned error can't find the container with id ced1ca345ac7b9710a7df25da559a5802d45fafbbcdea07a2e1c1b3f65b83df5 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.030142 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:37:35 crc kubenswrapper[4720]: W0122 06:37:35.137276 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-793cb8000bbfa6c10f15c2afeeccd15f56f91cf3dc0273cccf2ec71b6d5e4c28 WatchSource:0}: Error finding container 793cb8000bbfa6c10f15c2afeeccd15f56f91cf3dc0273cccf2ec71b6d5e4c28: Status 404 returned error can't find the container with id 793cb8000bbfa6c10f15c2afeeccd15f56f91cf3dc0273cccf2ec71b6d5e4c28 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.303216 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:35 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:35 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:35 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.303278 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.352969 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"] Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.477508 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:37:35 crc kubenswrapper[4720]: W0122 06:37:35.496429 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d99952_87c4_42b4_9679_689a9b8e3c63.slice/crio-02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80 WatchSource:0}: Error finding container 02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80: Status 404 returned error can't find the container with id 02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80 Jan 22 06:37:35 crc kubenswrapper[4720]: W0122 06:37:35.527003 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4eb8fd86693c1fdcf01e61bac82001d79b1bf77696017f0362aaed6a597149db WatchSource:0}: Error finding container 4eb8fd86693c1fdcf01e61bac82001d79b1bf77696017f0362aaed6a597149db: Status 404 returned error can't find the container with id 4eb8fd86693c1fdcf01e61bac82001d79b1bf77696017f0362aaed6a597149db Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.550375 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:37:35 crc kubenswrapper[4720]: W0122 06:37:35.569278 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb692d0a1_233a_41a6_b673_79eb7648c3b8.slice/crio-fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600 WatchSource:0}: Error finding container fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600: Status 404 returned error can't find the container with id fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.895969 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.896982 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.898575 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.900613 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.912027 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.931147 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" event={"ID":"c27ad45d-a6e8-48af-9417-5422ce60dcec","Type":"ContainerStarted","Data":"010b5a9962c9f0671fd301bdb2f34e77b12f2f1912188ded589e9f3f88489a55"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.931397 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" event={"ID":"c27ad45d-a6e8-48af-9417-5422ce60dcec","Type":"ContainerStarted","Data":"df5261edc36757235a737f8384eb22ddaaabb5a1005f44880e50bf5c0775be26"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.932308 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.933666 4720 generic.go:334] "Generic (PLEG): container finished" podID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerID="d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f" exitCode=0 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.933802 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" 
event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerDied","Data":"d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.933883 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerStarted","Data":"ced1ca345ac7b9710a7df25da559a5802d45fafbbcdea07a2e1c1b3f65b83df5"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.935615 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.936029 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"e081cb0d24ee3c432ac24a777fb16dd02fbe47f6e29716d21ac44b7d56e7b1ac"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.936081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"793cb8000bbfa6c10f15c2afeeccd15f56f91cf3dc0273cccf2ec71b6d5e4c28"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.937697 4720 generic.go:334] "Generic (PLEG): container finished" podID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerID="2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad" exitCode=0 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.937764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerDied","Data":"2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.940536 4720 generic.go:334] "Generic 
(PLEG): container finished" podID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerID="9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e" exitCode=0 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.940573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerDied","Data":"9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.940593 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerStarted","Data":"02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.942723 4720 generic.go:334] "Generic (PLEG): container finished" podID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerID="6532848adf57f0baefbc3174a61697838923f41ec34413ecb9d18c49a5865764" exitCode=0 Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.942981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerDied","Data":"6532848adf57f0baefbc3174a61697838923f41ec34413ecb9d18c49a5865764"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.943007 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerStarted","Data":"fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.944413 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4ad30dbadab48318f2dac817cd93b0f224fc5bd3145435bcdf016c5037b45226"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.944441 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"020f7e6f5c5cc02d37eb9a4d9d9c8b2a0d4806f3aa635af80ae6d27c760d8cb6"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.944758 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.946001 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1fa9c2dd242fd3b9b26ef3ac91cf7044f27dafbd85448eea2040ac3da3467b9c"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.946018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4eb8fd86693c1fdcf01e61bac82001d79b1bf77696017f0362aaed6a597149db"} Jan 22 06:37:35 crc kubenswrapper[4720]: I0122 06:37:35.988690 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" podStartSLOduration=123.988668896 podStartE2EDuration="2m3.988668896s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:35.988418739 +0000 UTC m=+148.130325444" watchObservedRunningTime="2026-01-22 06:37:35.988668896 +0000 UTC m=+148.130575611" Jan 22 06:37:36 crc 
kubenswrapper[4720]: I0122 06:37:36.078717 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.078900 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.182597 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.182835 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.182867 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 
06:37:36.207210 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.214608 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.234072 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.255889 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"] Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.263149 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.280411 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.285103 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"] Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.299661 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.301469 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:36 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:36 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:36 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.301537 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386530 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume\") pod \"e65daf94-2073-4b05-8b99-f80d7f777d12\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhq6v\" (UniqueName: \"kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v\") pod \"e65daf94-2073-4b05-8b99-f80d7f777d12\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386704 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume\") pod \"e65daf94-2073-4b05-8b99-f80d7f777d12\" (UID: \"e65daf94-2073-4b05-8b99-f80d7f777d12\") " 
Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386847 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386932 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk7kl\" (UniqueName: \"kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.386953 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.387693 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume" (OuterVolumeSpecName: "config-volume") pod "e65daf94-2073-4b05-8b99-f80d7f777d12" (UID: "e65daf94-2073-4b05-8b99-f80d7f777d12"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.393484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v" (OuterVolumeSpecName: "kube-api-access-jhq6v") pod "e65daf94-2073-4b05-8b99-f80d7f777d12" (UID: "e65daf94-2073-4b05-8b99-f80d7f777d12"). InnerVolumeSpecName "kube-api-access-jhq6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.395938 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e65daf94-2073-4b05-8b99-f80d7f777d12" (UID: "e65daf94-2073-4b05-8b99-f80d7f777d12"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.457437 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 22 06:37:36 crc kubenswrapper[4720]: W0122 06:37:36.467540 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1fc4cc03_d3f8_4a0c_be79_a8fc2c824291.slice/crio-0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e WatchSource:0}: Error finding container 0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e: Status 404 returned error can't find the container with id 0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.488798 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " 
pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.488882 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk7kl\" (UniqueName: \"kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.488934 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.489050 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e65daf94-2073-4b05-8b99-f80d7f777d12-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.489067 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e65daf94-2073-4b05-8b99-f80d7f777d12-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.489081 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhq6v\" (UniqueName: \"kubernetes.io/projected/e65daf94-2073-4b05-8b99-f80d7f777d12-kube-api-access-jhq6v\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.489460 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " 
pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.489938 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.515982 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk7kl\" (UniqueName: \"kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl\") pod \"redhat-marketplace-z59w9\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.613014 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.660357 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:37:36 crc kubenswrapper[4720]: E0122 06:37:36.660576 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65daf94-2073-4b05-8b99-f80d7f777d12" containerName="collect-profiles" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.660588 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65daf94-2073-4b05-8b99-f80d7f777d12" containerName="collect-profiles" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.660685 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65daf94-2073-4b05-8b99-f80d7f777d12" containerName="collect-profiles" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.661471 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.706230 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.792616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.792678 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2mq\" (UniqueName: \"kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.792958 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.800357 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.801233 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.808766 4720 patch_prober.go:28] interesting pod/console-f9d7485db-zv6lm 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.808818 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zv6lm" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.849530 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-ws6w8 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.849604 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ws6w8" podUID="dc1c1a54-81dc-4e91-80db-606befa6c477" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.850058 4720 patch_prober.go:28] interesting pod/downloads-7954f5f757-ws6w8 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.850117 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ws6w8" podUID="dc1c1a54-81dc-4e91-80db-606befa6c477" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 
06:37:36.894661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.895106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.895132 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2mq\" (UniqueName: \"kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.896890 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.899088 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.924097 4720 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"] Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.928162 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2mq\" (UniqueName: \"kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq\") pod \"redhat-marketplace-9bxdr\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:36 crc kubenswrapper[4720]: I0122 06:37:36.982412 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.004569 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.004729 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm" event={"ID":"e65daf94-2073-4b05-8b99-f80d7f777d12","Type":"ContainerDied","Data":"ad4611868f2299a9453be1cdf058a0a9993267e3971efe4696c6ca06e6a1d860"} Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.004763 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4611868f2299a9453be1cdf058a0a9993267e3971efe4696c6ca06e6a1d860" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.019618 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291","Type":"ContainerStarted","Data":"0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e"} Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.100281 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.100257396 
podStartE2EDuration="2.100257396s" podCreationTimestamp="2026-01-22 06:37:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:37.064142521 +0000 UTC m=+149.206049226" watchObservedRunningTime="2026-01-22 06:37:37.100257396 +0000 UTC m=+149.242164101" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.103133 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.104345 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.117627 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.122091 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.123284 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.214399 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.214454 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.262227 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.263929 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.268363 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.287128 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.290064 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.299401 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:37 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:37 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:37 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.299489 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.317048 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.317124 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.317665 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.338652 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.418666 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.418759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-ghnhg\" (UniqueName: \"kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.418853 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.441262 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.519804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghnhg\" (UniqueName: \"kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.520346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.520410 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities\") pod \"redhat-operators-nkz4c\" (UID: 
\"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.520902 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.521032 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.542285 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghnhg\" (UniqueName: \"kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg\") pod \"redhat-operators-nkz4c\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.594753 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:37:37 crc kubenswrapper[4720]: W0122 06:37:37.601556 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65587c45_16b7_47d5_882f_b57a4beb79c5.slice/crio-15debe37aaf611dc17ed017bb0d05b479d054ecd893e74240f0f925e40ac32f0 WatchSource:0}: Error finding container 15debe37aaf611dc17ed017bb0d05b479d054ecd893e74240f0f925e40ac32f0: Status 404 returned error can't find the container with id 15debe37aaf611dc17ed017bb0d05b479d054ecd893e74240f0f925e40ac32f0 Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.604879 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.657671 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.659116 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.669410 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.816625 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.816677 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.826120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.826981 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt42\" (UniqueName: \"kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.827106 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.827135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " 
pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.928453 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxt42\" (UniqueName: \"kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.928631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.928655 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.930252 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc kubenswrapper[4720]: I0122 06:37:37.930676 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:37 crc 
kubenswrapper[4720]: I0122 06:37:37.951633 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxt42\" (UniqueName: \"kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42\") pod \"redhat-operators-zl8pg\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.006821 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.008889 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.011727 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:37:38 crc kubenswrapper[4720]: W0122 06:37:38.020391 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbc2e7191_f9fa_4c8d_9dc6_6ebdd5e5aacd.slice/crio-2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876 WatchSource:0}: Error finding container 2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876: Status 404 returned error can't find the container with id 2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876 Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.038071 4720 generic.go:334] "Generic (PLEG): container finished" podID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerID="492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca" exitCode=0 Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.038210 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" 
event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerDied","Data":"492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca"} Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.038246 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerStarted","Data":"2c19198f85b808611a5372518ffe199395400555541fa30d5328d21745746fe7"} Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.057425 4720 generic.go:334] "Generic (PLEG): container finished" podID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerID="afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88" exitCode=0 Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.057617 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerDied","Data":"afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88"} Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.057656 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerStarted","Data":"15debe37aaf611dc17ed017bb0d05b479d054ecd893e74240f0f925e40ac32f0"} Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.063261 4720 generic.go:334] "Generic (PLEG): container finished" podID="1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" containerID="8de3431934892a51acb01a086fbee033124db3b29cd26d1bb466b73c670af1b9" exitCode=0 Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.063321 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291","Type":"ContainerDied","Data":"8de3431934892a51acb01a086fbee033124db3b29cd26d1bb466b73c670af1b9"} Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 
06:37:38.070417 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-dsfv4" Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.112896 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"] Jan 22 06:37:38 crc kubenswrapper[4720]: W0122 06:37:38.264721 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8e6204f_9762_43b9_859a_74aaf49f30f4.slice/crio-643936999e1caa9fadde39e103a14bee57ba457da811ba16db55e9d831f415e0 WatchSource:0}: Error finding container 643936999e1caa9fadde39e103a14bee57ba457da811ba16db55e9d831f415e0: Status 404 returned error can't find the container with id 643936999e1caa9fadde39e103a14bee57ba457da811ba16db55e9d831f415e0 Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.309202 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:38 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:38 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:38 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.309297 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:38 crc kubenswrapper[4720]: I0122 06:37:38.549643 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.076539 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerID="0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7" exitCode=0 Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.076699 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerDied","Data":"0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7"} Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.076815 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerStarted","Data":"643936999e1caa9fadde39e103a14bee57ba457da811ba16db55e9d831f415e0"} Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.087559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerStarted","Data":"045022ee17aef92e911961fa3cd5a5afadf6a97d4aa606217063ba74da1b1299"} Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.094254 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd","Type":"ContainerStarted","Data":"2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876"} Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.292463 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:39 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:39 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:39 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 
06:37:39.292818 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.414181 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.551226 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access\") pod \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.551372 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir\") pod \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\" (UID: \"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291\") " Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.551644 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" (UID: "1fc4cc03-d3f8-4a0c-be79-a8fc2c824291"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.556984 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" (UID: "1fc4cc03-d3f8-4a0c-be79-a8fc2c824291"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.653848 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:39 crc kubenswrapper[4720]: I0122 06:37:39.653893 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1fc4cc03-d3f8-4a0c-be79-a8fc2c824291-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.101139 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1fc4cc03-d3f8-4a0c-be79-a8fc2c824291","Type":"ContainerDied","Data":"0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e"} Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.101183 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c71509b8f165b120b33bbf58606d7ccf3e19c230539afe2fc1a47611fc0e67e" Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.101245 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.109686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerStarted","Data":"5984fdd6640b2f34518c9ca2db6d75570d5258a6927f5c2cc9ad0fc2192f2a30"} Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.129096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd","Type":"ContainerStarted","Data":"a672017c5ee79a2514667bec3db8d1d9ba4e390446428aa1a087112abde65ce9"} Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.297178 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:40 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:40 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:40 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:40 crc kubenswrapper[4720]: I0122 06:37:40.297480 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:41 crc kubenswrapper[4720]: I0122 06:37:41.152117 4720 generic.go:334] "Generic (PLEG): container finished" podID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerID="5984fdd6640b2f34518c9ca2db6d75570d5258a6927f5c2cc9ad0fc2192f2a30" exitCode=0 Jan 22 06:37:41 crc kubenswrapper[4720]: I0122 06:37:41.153067 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" 
event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerDied","Data":"5984fdd6640b2f34518c9ca2db6d75570d5258a6927f5c2cc9ad0fc2192f2a30"} Jan 22 06:37:41 crc kubenswrapper[4720]: I0122 06:37:41.218151 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.213891231 podStartE2EDuration="4.213891231s" podCreationTimestamp="2026-01-22 06:37:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:37:41.192609447 +0000 UTC m=+153.334516152" watchObservedRunningTime="2026-01-22 06:37:41.213891231 +0000 UTC m=+153.355797936" Jan 22 06:37:41 crc kubenswrapper[4720]: I0122 06:37:41.294735 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:41 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:41 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:41 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:41 crc kubenswrapper[4720]: I0122 06:37:41.294815 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:42 crc kubenswrapper[4720]: I0122 06:37:42.180054 4720 generic.go:334] "Generic (PLEG): container finished" podID="bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" containerID="a672017c5ee79a2514667bec3db8d1d9ba4e390446428aa1a087112abde65ce9" exitCode=0 Jan 22 06:37:42 crc kubenswrapper[4720]: I0122 06:37:42.180116 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd","Type":"ContainerDied","Data":"a672017c5ee79a2514667bec3db8d1d9ba4e390446428aa1a087112abde65ce9"} Jan 22 06:37:42 crc kubenswrapper[4720]: I0122 06:37:42.291727 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:42 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:42 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:42 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:42 crc kubenswrapper[4720]: I0122 06:37:42.291799 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:43 crc kubenswrapper[4720]: I0122 06:37:43.097392 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-t8k8z" Jan 22 06:37:43 crc kubenswrapper[4720]: I0122 06:37:43.292551 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:43 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:43 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:43 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:43 crc kubenswrapper[4720]: I0122 06:37:43.296074 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:44 
crc kubenswrapper[4720]: I0122 06:37:44.291959 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 22 06:37:44 crc kubenswrapper[4720]: [-]has-synced failed: reason withheld Jan 22 06:37:44 crc kubenswrapper[4720]: [+]process-running ok Jan 22 06:37:44 crc kubenswrapper[4720]: healthz check failed Jan 22 06:37:44 crc kubenswrapper[4720]: I0122 06:37:44.292028 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 22 06:37:45 crc kubenswrapper[4720]: I0122 06:37:45.293231 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:45 crc kubenswrapper[4720]: I0122 06:37:45.297866 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-6j6b9" Jan 22 06:37:46 crc kubenswrapper[4720]: I0122 06:37:46.800242 4720 patch_prober.go:28] interesting pod/console-f9d7485db-zv6lm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 22 06:37:46 crc kubenswrapper[4720]: I0122 06:37:46.800306 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zv6lm" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 22 06:37:46 crc kubenswrapper[4720]: I0122 06:37:46.855112 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ws6w8" Jan 22 06:37:47 crc kubenswrapper[4720]: I0122 06:37:47.234851 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.444364 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.542597 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir\") pod \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.542710 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" (UID: "bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.543175 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access\") pod \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\" (UID: \"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd\") " Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.544657 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.551739 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" (UID: "bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:37:48 crc kubenswrapper[4720]: I0122 06:37:48.647384 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 06:37:49 crc kubenswrapper[4720]: I0122 06:37:49.246289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd","Type":"ContainerDied","Data":"2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876"} Jan 22 06:37:49 crc kubenswrapper[4720]: I0122 06:37:49.246340 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 22 06:37:49 crc kubenswrapper[4720]: I0122 06:37:49.246355 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2621a1ab47d6b5480ede41d6c4edfe1deb328fec41ef8c0746016736c08ce876" Jan 22 06:37:54 crc kubenswrapper[4720]: I0122 06:37:54.803110 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" Jan 22 06:37:55 crc kubenswrapper[4720]: I0122 06:37:55.453931 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:55 crc kubenswrapper[4720]: I0122 06:37:55.465017 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/409f50e8-9b68-4efe-8eb4-bc144d383817-metrics-certs\") pod \"network-metrics-daemon-kvtch\" (UID: \"409f50e8-9b68-4efe-8eb4-bc144d383817\") " pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:55 crc kubenswrapper[4720]: I0122 06:37:55.710045 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kvtch" Jan 22 06:37:58 crc kubenswrapper[4720]: I0122 06:37:58.263791 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:58 crc kubenswrapper[4720]: I0122 06:37:58.270633 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:37:58 crc kubenswrapper[4720]: I0122 06:37:58.330365 4720 patch_prober.go:28] interesting pod/router-default-5444994796-6j6b9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 22 06:37:58 crc kubenswrapper[4720]: I0122 06:37:58.330725 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-6j6b9" podUID="12b3f8d7-d79f-48e6-be2f-eeb97827e913" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 06:37:59 crc kubenswrapper[4720]: I0122 06:37:59.781049 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:37:59 crc kubenswrapper[4720]: I0122 06:37:59.781213 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:38:08 crc kubenswrapper[4720]: I0122 
06:38:08.034589 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-24gcw" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.593149 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.594206 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxt42,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zl8pg_openshift-marketplace(0e7aefc5-0cea-4908-99f3-7038ed16f7a0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\": context canceled" logger="UnhandledError" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.595686 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:bb28df9596f21787435f83dcb227d72eefd3603318b7d3461e9225570ddef962\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-zl8pg" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.680679 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.681015 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmhr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tv5kl_openshift-marketplace(75d99952-87c4-42b4-9679-689a9b8e3c63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:38:08 crc kubenswrapper[4720]: E0122 06:38:08.682341 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tv5kl" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" Jan 22 06:38:09 crc 
kubenswrapper[4720]: E0122 06:38:09.815554 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 06:38:09 crc kubenswrapper[4720]: E0122 06:38:09.816324 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bk7kl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-z59w9_openshift-marketplace(42ecbfe2-1714-40ca-b7ac-191fcbd65b0e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:38:09 crc kubenswrapper[4720]: E0122 06:38:09.817449 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-z59w9" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.693636 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-z59w9" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.693723 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zl8pg" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.693793 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-tv5kl" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.801897 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.802777 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l8tjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-dgfdc_openshift-marketplace(67487e16-e2f8-441f-9fd2-41e1997d91df): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" logger="UnhandledError" Jan 22 06:38:12 crc kubenswrapper[4720]: E0122 06:38:12.804510 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-dgfdc" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.889484 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 06:38:13 crc kubenswrapper[4720]: E0122 06:38:13.890075 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.890089 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: E0122 06:38:13.890098 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.890104 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.890208 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2e7191-f9fa-4c8d-9dc6-6ebdd5e5aacd" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.890223 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc4cc03-d3f8-4a0c-be79-a8fc2c824291" containerName="pruner" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.890721 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.892584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.892624 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.893940 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.954845 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:13 crc kubenswrapper[4720]: I0122 06:38:13.955007 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.056191 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.056312 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.056424 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.086269 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-dgfdc" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.088057 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.144826 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.145058 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kxqxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-bvbhh_openshift-marketplace(557f2e7c-b408-456f-bfc8-b6714839b46a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.146762 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-bvbhh" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" 
Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.184737 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.185310 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zg2mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-9bxdr_openshift-marketplace(65587c45-16b7-47d5-882f-b57a4beb79c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.186656 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9bxdr" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.214741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.231011 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.231413 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwp27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dlsd5_openshift-marketplace(b692d0a1-233a-41a6-b673-79eb7648c3b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.232691 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dlsd5" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" Jan 22 06:38:14 crc 
kubenswrapper[4720]: I0122 06:38:14.433018 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerStarted","Data":"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"} Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.435416 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dlsd5" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.435437 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-bvbhh" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" Jan 22 06:38:14 crc kubenswrapper[4720]: E0122 06:38:14.435470 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9bxdr" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.662321 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kvtch"] Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.744423 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 22 06:38:14 crc kubenswrapper[4720]: I0122 06:38:14.799437 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.441802 4720 generic.go:334] "Generic (PLEG): container finished" podID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerID="35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006" exitCode=0 Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.441943 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerDied","Data":"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.446563 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22356802-c9a4-4e07-a208-daa013fe13de","Type":"ContainerStarted","Data":"69a221d1aab7a19842479e543a5393ec45ca9e8f12032d5223ce02cc27b4d7e8"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.446643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22356802-c9a4-4e07-a208-daa013fe13de","Type":"ContainerStarted","Data":"a574aca04e677662a9a67d304abf6c8ed31e69c81ef8c8ef93bb9d6c6104851b"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.451779 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kvtch" event={"ID":"409f50e8-9b68-4efe-8eb4-bc144d383817","Type":"ContainerStarted","Data":"7a908a29db9f691872c82e58ba17f0a3bc7540a3024b5da6fdfcba6281096754"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.451849 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kvtch" event={"ID":"409f50e8-9b68-4efe-8eb4-bc144d383817","Type":"ContainerStarted","Data":"06763bf62610717db089f01e6e51276f041ea21bf2617f360baeb15dc9c7e8ac"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.451871 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kvtch" event={"ID":"409f50e8-9b68-4efe-8eb4-bc144d383817","Type":"ContainerStarted","Data":"019ddb991394a1a0f0483ae9ba55d1c4a6ab9d1eae101a77b7dd7829870e28f8"} Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.511678 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.511644725 podStartE2EDuration="2.511644725s" podCreationTimestamp="2026-01-22 06:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:38:15.493601023 +0000 UTC m=+187.635507748" watchObservedRunningTime="2026-01-22 06:38:15.511644725 +0000 UTC m=+187.653551430" Jan 22 06:38:15 crc kubenswrapper[4720]: I0122 06:38:15.512699 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kvtch" podStartSLOduration=163.512689875 podStartE2EDuration="2m43.512689875s" podCreationTimestamp="2026-01-22 06:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:38:15.510708609 +0000 UTC m=+187.652615344" watchObservedRunningTime="2026-01-22 06:38:15.512689875 +0000 UTC m=+187.654596590" Jan 22 06:38:16 crc kubenswrapper[4720]: I0122 06:38:16.460691 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerStarted","Data":"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"} Jan 22 06:38:16 crc kubenswrapper[4720]: I0122 06:38:16.463052 4720 generic.go:334] "Generic (PLEG): container finished" podID="22356802-c9a4-4e07-a208-daa013fe13de" containerID="69a221d1aab7a19842479e543a5393ec45ca9e8f12032d5223ce02cc27b4d7e8" exitCode=0 Jan 22 
06:38:16 crc kubenswrapper[4720]: I0122 06:38:16.463159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22356802-c9a4-4e07-a208-daa013fe13de","Type":"ContainerDied","Data":"69a221d1aab7a19842479e543a5393ec45ca9e8f12032d5223ce02cc27b4d7e8"} Jan 22 06:38:16 crc kubenswrapper[4720]: I0122 06:38:16.484070 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkz4c" podStartSLOduration=2.636666516 podStartE2EDuration="39.484046985s" podCreationTimestamp="2026-01-22 06:37:37 +0000 UTC" firstStartedPulling="2026-01-22 06:37:39.078266917 +0000 UTC m=+151.220173622" lastFinishedPulling="2026-01-22 06:38:15.925647386 +0000 UTC m=+188.067554091" observedRunningTime="2026-01-22 06:38:16.480746131 +0000 UTC m=+188.622652836" watchObservedRunningTime="2026-01-22 06:38:16.484046985 +0000 UTC m=+188.625953700" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.596273 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.596762 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.825571 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.927461 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access\") pod \"22356802-c9a4-4e07-a208-daa013fe13de\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.928202 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir\") pod \"22356802-c9a4-4e07-a208-daa013fe13de\" (UID: \"22356802-c9a4-4e07-a208-daa013fe13de\") " Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.928341 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "22356802-c9a4-4e07-a208-daa013fe13de" (UID: "22356802-c9a4-4e07-a208-daa013fe13de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.928580 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/22356802-c9a4-4e07-a208-daa013fe13de-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:17 crc kubenswrapper[4720]: I0122 06:38:17.934501 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "22356802-c9a4-4e07-a208-daa013fe13de" (UID: "22356802-c9a4-4e07-a208-daa013fe13de"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.029703 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/22356802-c9a4-4e07-a208-daa013fe13de-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.482832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"22356802-c9a4-4e07-a208-daa013fe13de","Type":"ContainerDied","Data":"a574aca04e677662a9a67d304abf6c8ed31e69c81ef8c8ef93bb9d6c6104851b"} Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.482948 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a574aca04e677662a9a67d304abf6c8ed31e69c81ef8c8ef93bb9d6c6104851b" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.482890 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.722567 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkz4c" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="registry-server" probeResult="failure" output=< Jan 22 06:38:18 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Jan 22 06:38:18 crc kubenswrapper[4720]: > Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.881860 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 06:38:18 crc kubenswrapper[4720]: E0122 06:38:18.882181 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22356802-c9a4-4e07-a208-daa013fe13de" containerName="pruner" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.882199 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="22356802-c9a4-4e07-a208-daa013fe13de" 
containerName="pruner" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.882347 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="22356802-c9a4-4e07-a208-daa013fe13de" containerName="pruner" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.882929 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.886871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.887015 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.897861 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.946533 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.946745 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:18 crc kubenswrapper[4720]: I0122 06:38:18.946971 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock\") pod 
\"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.048527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.048726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.048854 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.049145 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.049269 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: 
I0122 06:38:19.068980 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access\") pod \"installer-9-crc\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.199036 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 22 06:38:19 crc kubenswrapper[4720]: I0122 06:38:19.696321 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 22 06:38:20 crc kubenswrapper[4720]: I0122 06:38:20.497399 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35ef24cd-5470-42e1-9bdc-c68ec760aae2","Type":"ContainerStarted","Data":"0ced4493328f6104b78cb5fa0d139a6993caaa000a5705f976d4ffb4239dea66"} Jan 22 06:38:21 crc kubenswrapper[4720]: I0122 06:38:21.504855 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35ef24cd-5470-42e1-9bdc-c68ec760aae2","Type":"ContainerStarted","Data":"d3a1764e955ca548b34f199fac5c54e94189b187a5d9c70f34bc08177aa4ad8e"} Jan 22 06:38:21 crc kubenswrapper[4720]: I0122 06:38:21.521761 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.521737677 podStartE2EDuration="3.521737677s" podCreationTimestamp="2026-01-22 06:38:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:38:21.521378777 +0000 UTC m=+193.663285492" watchObservedRunningTime="2026-01-22 06:38:21.521737677 +0000 UTC m=+193.663644382" Jan 22 06:38:28 crc kubenswrapper[4720]: I0122 06:38:28.513010 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:38:28 crc kubenswrapper[4720]: I0122 06:38:28.581729 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.553856 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerStarted","Data":"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.558723 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerStarted","Data":"f3a241d253b5e003839145ec868cd60175a752667349bbc99c878c23a387757b"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.561074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerStarted","Data":"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.563352 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerStarted","Data":"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.565341 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerStarted","Data":"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.571191 4720 generic.go:334] "Generic (PLEG): container 
finished" podID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerID="379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f" exitCode=0 Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.571432 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerDied","Data":"379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.574961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerStarted","Data":"80bb9fbb458d15c1532a3b3ff1f288a38aa5bc229f70a71dac5351a5b1881af6"} Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.780795 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:38:29 crc kubenswrapper[4720]: I0122 06:38:29.781162 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:38:29 crc kubenswrapper[4720]: E0122 06:38:29.907634 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557f2e7c_b408_456f_bfc8_b6714839b46a.slice/crio-conmon-4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod557f2e7c_b408_456f_bfc8_b6714839b46a.slice/crio-4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd.scope\": RecentStats: unable to find data in memory cache]" Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.583161 4720 generic.go:334] "Generic (PLEG): container finished" podID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerID="80bb9fbb458d15c1532a3b3ff1f288a38aa5bc229f70a71dac5351a5b1881af6" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.583281 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerDied","Data":"80bb9fbb458d15c1532a3b3ff1f288a38aa5bc229f70a71dac5351a5b1881af6"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.587075 4720 generic.go:334] "Generic (PLEG): container finished" podID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerID="2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.587129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerDied","Data":"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.594181 4720 generic.go:334] "Generic (PLEG): container finished" podID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerID="f3a241d253b5e003839145ec868cd60175a752667349bbc99c878c23a387757b" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.594250 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerDied","Data":"f3a241d253b5e003839145ec868cd60175a752667349bbc99c878c23a387757b"} Jan 22 06:38:30 crc 
kubenswrapper[4720]: I0122 06:38:30.597941 4720 generic.go:334] "Generic (PLEG): container finished" podID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerID="3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.598017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerDied","Data":"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.600140 4720 generic.go:334] "Generic (PLEG): container finished" podID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerID="4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.600178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerDied","Data":"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.602966 4720 generic.go:334] "Generic (PLEG): container finished" podID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerID="4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd" exitCode=0 Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.603023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerDied","Data":"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.606483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" 
event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerStarted","Data":"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c"} Jan 22 06:38:30 crc kubenswrapper[4720]: I0122 06:38:30.772379 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tv5kl" podStartSLOduration=2.505409615 podStartE2EDuration="56.772349575s" podCreationTimestamp="2026-01-22 06:37:34 +0000 UTC" firstStartedPulling="2026-01-22 06:37:35.941583049 +0000 UTC m=+148.083489754" lastFinishedPulling="2026-01-22 06:38:30.208523009 +0000 UTC m=+202.350429714" observedRunningTime="2026-01-22 06:38:30.763160794 +0000 UTC m=+202.905067489" watchObservedRunningTime="2026-01-22 06:38:30.772349575 +0000 UTC m=+202.914256290" Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.616766 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerStarted","Data":"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"} Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.620599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerStarted","Data":"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff"} Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.623556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerStarted","Data":"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"} Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.626121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" 
event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerStarted","Data":"e4e54d2ecdd4061ffd2deb6c8575f75622a2f038b2410dbc9ea8342aa1e62182"} Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.649833 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9bxdr" podStartSLOduration=2.743270301 podStartE2EDuration="55.649814345s" podCreationTimestamp="2026-01-22 06:37:36 +0000 UTC" firstStartedPulling="2026-01-22 06:37:38.102954015 +0000 UTC m=+150.244860720" lastFinishedPulling="2026-01-22 06:38:31.009498049 +0000 UTC m=+203.151404764" observedRunningTime="2026-01-22 06:38:31.646799973 +0000 UTC m=+203.788706678" watchObservedRunningTime="2026-01-22 06:38:31.649814345 +0000 UTC m=+203.791721050" Jan 22 06:38:31 crc kubenswrapper[4720]: I0122 06:38:31.690098 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z59w9" podStartSLOduration=2.808156599 podStartE2EDuration="55.690072816s" podCreationTimestamp="2026-01-22 06:37:36 +0000 UTC" firstStartedPulling="2026-01-22 06:37:38.101465813 +0000 UTC m=+150.243372518" lastFinishedPulling="2026-01-22 06:38:30.983382 +0000 UTC m=+203.125288735" observedRunningTime="2026-01-22 06:38:31.689444607 +0000 UTC m=+203.831351322" watchObservedRunningTime="2026-01-22 06:38:31.690072816 +0000 UTC m=+203.831979521" Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.633169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerStarted","Data":"bfc095255073f80b3b211dc677e38b20d156bd3c97c9f9aa02b70c2a2d69b8e2"} Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.636826 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" 
event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerStarted","Data":"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"} Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.661964 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlsd5" podStartSLOduration=2.961913565 podStartE2EDuration="58.661945414s" podCreationTimestamp="2026-01-22 06:37:34 +0000 UTC" firstStartedPulling="2026-01-22 06:37:35.943567436 +0000 UTC m=+148.085474141" lastFinishedPulling="2026-01-22 06:38:31.643599285 +0000 UTC m=+203.785505990" observedRunningTime="2026-01-22 06:38:32.657496578 +0000 UTC m=+204.799403283" watchObservedRunningTime="2026-01-22 06:38:32.661945414 +0000 UTC m=+204.803852109" Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.687524 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgfdc" podStartSLOduration=3.62060972 podStartE2EDuration="58.687498845s" podCreationTimestamp="2026-01-22 06:37:34 +0000 UTC" firstStartedPulling="2026-01-22 06:37:35.935373293 +0000 UTC m=+148.077279998" lastFinishedPulling="2026-01-22 06:38:31.002262418 +0000 UTC m=+203.144169123" observedRunningTime="2026-01-22 06:38:32.683139392 +0000 UTC m=+204.825046107" watchObservedRunningTime="2026-01-22 06:38:32.687498845 +0000 UTC m=+204.829405550" Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.710351 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zl8pg" podStartSLOduration=5.815024654 podStartE2EDuration="55.710326254s" podCreationTimestamp="2026-01-22 06:37:37 +0000 UTC" firstStartedPulling="2026-01-22 06:37:41.156350598 +0000 UTC m=+153.298257303" lastFinishedPulling="2026-01-22 06:38:31.051652198 +0000 UTC m=+203.193558903" observedRunningTime="2026-01-22 06:38:32.705900878 +0000 UTC m=+204.847807583" watchObservedRunningTime="2026-01-22 
06:38:32.710326254 +0000 UTC m=+204.852232959" Jan 22 06:38:32 crc kubenswrapper[4720]: I0122 06:38:32.729609 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bvbhh" podStartSLOduration=3.573058337 podStartE2EDuration="58.729584123s" podCreationTimestamp="2026-01-22 06:37:34 +0000 UTC" firstStartedPulling="2026-01-22 06:37:35.939561912 +0000 UTC m=+148.081468617" lastFinishedPulling="2026-01-22 06:38:31.096087678 +0000 UTC m=+203.237994403" observedRunningTime="2026-01-22 06:38:32.728883601 +0000 UTC m=+204.870790306" watchObservedRunningTime="2026-01-22 06:38:32.729584123 +0000 UTC m=+204.871490828" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.389485 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.390364 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.437555 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.606493 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.606564 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.653236 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.776831 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.776888 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:34 crc kubenswrapper[4720]: I0122 06:38:34.828704 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:35 crc kubenswrapper[4720]: I0122 06:38:35.030561 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:35 crc kubenswrapper[4720]: I0122 06:38:35.030633 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:35 crc kubenswrapper[4720]: I0122 06:38:35.070460 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:35 crc kubenswrapper[4720]: I0122 06:38:35.722338 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 06:38:36.614338 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 06:38:36.614668 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 06:38:36.678317 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 06:38:36.731835 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 
06:38:36.983047 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:36 crc kubenswrapper[4720]: I0122 06:38:36.983395 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:37 crc kubenswrapper[4720]: I0122 06:38:37.050406 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:37 crc kubenswrapper[4720]: I0122 06:38:37.723312 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:38 crc kubenswrapper[4720]: I0122 06:38:38.012752 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:38 crc kubenswrapper[4720]: I0122 06:38:38.012815 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:38 crc kubenswrapper[4720]: I0122 06:38:38.444214 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:38:38 crc kubenswrapper[4720]: I0122 06:38:38.444530 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tv5kl" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="registry-server" containerID="cri-o://e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c" gracePeriod=2 Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.060628 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zl8pg" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="registry-server" probeResult="failure" output=< Jan 22 06:38:39 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s 
Jan 22 06:38:39 crc kubenswrapper[4720]: > Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.486069 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.661085 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities\") pod \"75d99952-87c4-42b4-9679-689a9b8e3c63\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.661208 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsmhr\" (UniqueName: \"kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr\") pod \"75d99952-87c4-42b4-9679-689a9b8e3c63\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.661230 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content\") pod \"75d99952-87c4-42b4-9679-689a9b8e3c63\" (UID: \"75d99952-87c4-42b4-9679-689a9b8e3c63\") " Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.662776 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities" (OuterVolumeSpecName: "utilities") pod "75d99952-87c4-42b4-9679-689a9b8e3c63" (UID: "75d99952-87c4-42b4-9679-689a9b8e3c63"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.667499 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr" (OuterVolumeSpecName: "kube-api-access-jsmhr") pod "75d99952-87c4-42b4-9679-689a9b8e3c63" (UID: "75d99952-87c4-42b4-9679-689a9b8e3c63"). InnerVolumeSpecName "kube-api-access-jsmhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.689037 4720 generic.go:334] "Generic (PLEG): container finished" podID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerID="e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c" exitCode=0 Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.689091 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerDied","Data":"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c"} Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.689125 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5kl" event={"ID":"75d99952-87c4-42b4-9679-689a9b8e3c63","Type":"ContainerDied","Data":"02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80"} Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.689154 4720 scope.go:117] "RemoveContainer" containerID="e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.689299 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tv5kl" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.725961 4720 scope.go:117] "RemoveContainer" containerID="379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.743284 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75d99952-87c4-42b4-9679-689a9b8e3c63" (UID: "75d99952-87c4-42b4-9679-689a9b8e3c63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.753494 4720 scope.go:117] "RemoveContainer" containerID="9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.764109 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.764144 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsmhr\" (UniqueName: \"kubernetes.io/projected/75d99952-87c4-42b4-9679-689a9b8e3c63-kube-api-access-jsmhr\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.764161 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d99952-87c4-42b4-9679-689a9b8e3c63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.777862 4720 scope.go:117] "RemoveContainer" containerID="e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c" Jan 22 06:38:39 crc kubenswrapper[4720]: E0122 06:38:39.778436 4720 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c\": container with ID starting with e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c not found: ID does not exist" containerID="e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.778472 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c"} err="failed to get container status \"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c\": rpc error: code = NotFound desc = could not find container \"e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c\": container with ID starting with e4ee6173e20f1ccc2964e46ec93ed3daf12adeaa6a9ab65e3c2f9432f2d7b97c not found: ID does not exist" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.778527 4720 scope.go:117] "RemoveContainer" containerID="379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f" Jan 22 06:38:39 crc kubenswrapper[4720]: E0122 06:38:39.779024 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f\": container with ID starting with 379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f not found: ID does not exist" containerID="379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.779156 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f"} err="failed to get container status \"379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f\": rpc error: code = NotFound desc = could not find container 
\"379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f\": container with ID starting with 379c41485aa8df1d51ed013515fcf9aee2faa83b7a42fffebbd6b55c0d5f4e5f not found: ID does not exist" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.779196 4720 scope.go:117] "RemoveContainer" containerID="9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e" Jan 22 06:38:39 crc kubenswrapper[4720]: E0122 06:38:39.779707 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e\": container with ID starting with 9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e not found: ID does not exist" containerID="9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e" Jan 22 06:38:39 crc kubenswrapper[4720]: I0122 06:38:39.779740 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e"} err="failed to get container status \"9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e\": rpc error: code = NotFound desc = could not find container \"9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e\": container with ID starting with 9518f60810364cf372b6ccdca6a52dd6c89d6d02e564a6d27ad3bac57964838e not found: ID does not exist" Jan 22 06:38:40 crc kubenswrapper[4720]: I0122 06:38:40.025966 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:38:40 crc kubenswrapper[4720]: I0122 06:38:40.028731 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tv5kl"] Jan 22 06:38:40 crc kubenswrapper[4720]: E0122 06:38:40.047522 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d99952_87c4_42b4_9679_689a9b8e3c63.slice/crio-02a0b4d93babf43e686b0c9e3f1f96cbad306e9ffe9cb8c7b0eee8305d964d80\": RecentStats: unable to find data in memory cache]" Jan 22 06:38:40 crc kubenswrapper[4720]: I0122 06:38:40.233348 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" path="/var/lib/kubelet/pods/75d99952-87c4-42b4-9679-689a9b8e3c63/volumes" Jan 22 06:38:40 crc kubenswrapper[4720]: I0122 06:38:40.649436 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:38:40 crc kubenswrapper[4720]: I0122 06:38:40.649893 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9bxdr" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="registry-server" containerID="cri-o://d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff" gracePeriod=2 Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.139111 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.286444 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2mq\" (UniqueName: \"kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq\") pod \"65587c45-16b7-47d5-882f-b57a4beb79c5\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.286574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities\") pod \"65587c45-16b7-47d5-882f-b57a4beb79c5\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.286665 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content\") pod \"65587c45-16b7-47d5-882f-b57a4beb79c5\" (UID: \"65587c45-16b7-47d5-882f-b57a4beb79c5\") " Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.288190 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities" (OuterVolumeSpecName: "utilities") pod "65587c45-16b7-47d5-882f-b57a4beb79c5" (UID: "65587c45-16b7-47d5-882f-b57a4beb79c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.292269 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq" (OuterVolumeSpecName: "kube-api-access-zg2mq") pod "65587c45-16b7-47d5-882f-b57a4beb79c5" (UID: "65587c45-16b7-47d5-882f-b57a4beb79c5"). InnerVolumeSpecName "kube-api-access-zg2mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.316789 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65587c45-16b7-47d5-882f-b57a4beb79c5" (UID: "65587c45-16b7-47d5-882f-b57a4beb79c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.388513 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2mq\" (UniqueName: \"kubernetes.io/projected/65587c45-16b7-47d5-882f-b57a4beb79c5-kube-api-access-zg2mq\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.388581 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.388606 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65587c45-16b7-47d5-882f-b57a4beb79c5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.711223 4720 generic.go:334] "Generic (PLEG): container finished" podID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerID="d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff" exitCode=0 Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.711334 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxdr" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.711341 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerDied","Data":"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff"} Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.711438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxdr" event={"ID":"65587c45-16b7-47d5-882f-b57a4beb79c5","Type":"ContainerDied","Data":"15debe37aaf611dc17ed017bb0d05b479d054ecd893e74240f0f925e40ac32f0"} Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.711473 4720 scope.go:117] "RemoveContainer" containerID="d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.738565 4720 scope.go:117] "RemoveContainer" containerID="4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.760041 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.766514 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxdr"] Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.776735 4720 scope.go:117] "RemoveContainer" containerID="afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.795780 4720 scope.go:117] "RemoveContainer" containerID="d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff" Jan 22 06:38:41 crc kubenswrapper[4720]: E0122 06:38:41.796487 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff\": container with ID starting with d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff not found: ID does not exist" containerID="d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.796564 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff"} err="failed to get container status \"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff\": rpc error: code = NotFound desc = could not find container \"d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff\": container with ID starting with d8daef62c81dd209ca9a779a06ad27e97fe6741b5008839c2ff23e411dab33ff not found: ID does not exist" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.796616 4720 scope.go:117] "RemoveContainer" containerID="4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049" Jan 22 06:38:41 crc kubenswrapper[4720]: E0122 06:38:41.797285 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049\": container with ID starting with 4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049 not found: ID does not exist" containerID="4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.797349 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049"} err="failed to get container status \"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049\": rpc error: code = NotFound desc = could not find container \"4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049\": container with ID 
starting with 4002294fc5864e34d28b6cb78370512daba45b43f8d4f8e56d105a119a32a049 not found: ID does not exist" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.797396 4720 scope.go:117] "RemoveContainer" containerID="afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88" Jan 22 06:38:41 crc kubenswrapper[4720]: E0122 06:38:41.797730 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88\": container with ID starting with afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88 not found: ID does not exist" containerID="afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88" Jan 22 06:38:41 crc kubenswrapper[4720]: I0122 06:38:41.797782 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88"} err="failed to get container status \"afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88\": rpc error: code = NotFound desc = could not find container \"afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88\": container with ID starting with afd9e4f5c33f27b4fd289931e96ddd90662d68e8753055d41f324e571c2e2e88 not found: ID does not exist" Jan 22 06:38:42 crc kubenswrapper[4720]: I0122 06:38:42.222809 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" path="/var/lib/kubelet/pods/65587c45-16b7-47d5-882f-b57a4beb79c5/volumes" Jan 22 06:38:44 crc kubenswrapper[4720]: I0122 06:38:44.431017 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:38:44 crc kubenswrapper[4720]: I0122 06:38:44.667740 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:38:44 crc 
kubenswrapper[4720]: I0122 06:38:44.823639 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:46 crc kubenswrapper[4720]: I0122 06:38:46.587620 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-vp8tq"] Jan 22 06:38:46 crc kubenswrapper[4720]: I0122 06:38:46.849858 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:38:46 crc kubenswrapper[4720]: I0122 06:38:46.850166 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlsd5" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="registry-server" containerID="cri-o://bfc095255073f80b3b211dc677e38b20d156bd3c97c9f9aa02b70c2a2d69b8e2" gracePeriod=2 Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.754418 4720 generic.go:334] "Generic (PLEG): container finished" podID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerID="bfc095255073f80b3b211dc677e38b20d156bd3c97c9f9aa02b70c2a2d69b8e2" exitCode=0 Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.754484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerDied","Data":"bfc095255073f80b3b211dc677e38b20d156bd3c97c9f9aa02b70c2a2d69b8e2"} Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.754791 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlsd5" event={"ID":"b692d0a1-233a-41a6-b673-79eb7648c3b8","Type":"ContainerDied","Data":"fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600"} Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.754826 4720 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fa57ce837c81c21acabaecef35c5f912e5e08c4fa08120f882583fb9221d5600" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.754954 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.891585 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities\") pod \"b692d0a1-233a-41a6-b673-79eb7648c3b8\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.891650 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content\") pod \"b692d0a1-233a-41a6-b673-79eb7648c3b8\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.891746 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwp27\" (UniqueName: \"kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27\") pod \"b692d0a1-233a-41a6-b673-79eb7648c3b8\" (UID: \"b692d0a1-233a-41a6-b673-79eb7648c3b8\") " Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.892768 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities" (OuterVolumeSpecName: "utilities") pod "b692d0a1-233a-41a6-b673-79eb7648c3b8" (UID: "b692d0a1-233a-41a6-b673-79eb7648c3b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.898074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27" (OuterVolumeSpecName: "kube-api-access-zwp27") pod "b692d0a1-233a-41a6-b673-79eb7648c3b8" (UID: "b692d0a1-233a-41a6-b673-79eb7648c3b8"). InnerVolumeSpecName "kube-api-access-zwp27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.952654 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b692d0a1-233a-41a6-b673-79eb7648c3b8" (UID: "b692d0a1-233a-41a6-b673-79eb7648c3b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.993738 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwp27\" (UniqueName: \"kubernetes.io/projected/b692d0a1-233a-41a6-b673-79eb7648c3b8-kube-api-access-zwp27\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.993776 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:47 crc kubenswrapper[4720]: I0122 06:38:47.993788 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b692d0a1-233a-41a6-b673-79eb7648c3b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:48 crc kubenswrapper[4720]: I0122 06:38:48.052844 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:48 crc 
kubenswrapper[4720]: I0122 06:38:48.097632 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:48 crc kubenswrapper[4720]: I0122 06:38:48.758636 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlsd5" Jan 22 06:38:48 crc kubenswrapper[4720]: I0122 06:38:48.776416 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:38:48 crc kubenswrapper[4720]: I0122 06:38:48.782634 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlsd5"] Jan 22 06:38:50 crc kubenswrapper[4720]: I0122 06:38:50.217147 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" path="/var/lib/kubelet/pods/b692d0a1-233a-41a6-b673-79eb7648c3b8/volumes" Jan 22 06:38:50 crc kubenswrapper[4720]: I0122 06:38:50.442753 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:38:50 crc kubenswrapper[4720]: I0122 06:38:50.443076 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zl8pg" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="registry-server" containerID="cri-o://e4e54d2ecdd4061ffd2deb6c8575f75622a2f038b2410dbc9ea8342aa1e62182" gracePeriod=2 Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.790941 4720 generic.go:334] "Generic (PLEG): container finished" podID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerID="e4e54d2ecdd4061ffd2deb6c8575f75622a2f038b2410dbc9ea8342aa1e62182" exitCode=0 Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.791004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" 
event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerDied","Data":"e4e54d2ecdd4061ffd2deb6c8575f75622a2f038b2410dbc9ea8342aa1e62182"} Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.863487 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.973667 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxt42\" (UniqueName: \"kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42\") pod \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.973823 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities\") pod \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.973886 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content\") pod \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\" (UID: \"0e7aefc5-0cea-4908-99f3-7038ed16f7a0\") " Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.975038 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities" (OuterVolumeSpecName: "utilities") pod "0e7aefc5-0cea-4908-99f3-7038ed16f7a0" (UID: "0e7aefc5-0cea-4908-99f3-7038ed16f7a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:53 crc kubenswrapper[4720]: I0122 06:38:53.980927 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42" (OuterVolumeSpecName: "kube-api-access-mxt42") pod "0e7aefc5-0cea-4908-99f3-7038ed16f7a0" (UID: "0e7aefc5-0cea-4908-99f3-7038ed16f7a0"). InnerVolumeSpecName "kube-api-access-mxt42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.075534 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.075572 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxt42\" (UniqueName: \"kubernetes.io/projected/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-kube-api-access-mxt42\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.137262 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e7aefc5-0cea-4908-99f3-7038ed16f7a0" (UID: "0e7aefc5-0cea-4908-99f3-7038ed16f7a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.177280 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e7aefc5-0cea-4908-99f3-7038ed16f7a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.802109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zl8pg" event={"ID":"0e7aefc5-0cea-4908-99f3-7038ed16f7a0","Type":"ContainerDied","Data":"045022ee17aef92e911961fa3cd5a5afadf6a97d4aa606217063ba74da1b1299"} Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.802184 4720 scope.go:117] "RemoveContainer" containerID="e4e54d2ecdd4061ffd2deb6c8575f75622a2f038b2410dbc9ea8342aa1e62182" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.802205 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zl8pg" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.821220 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.823593 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zl8pg"] Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.824857 4720 scope.go:117] "RemoveContainer" containerID="f3a241d253b5e003839145ec868cd60175a752667349bbc99c878c23a387757b" Jan 22 06:38:54 crc kubenswrapper[4720]: I0122 06:38:54.845535 4720 scope.go:117] "RemoveContainer" containerID="5984fdd6640b2f34518c9ca2db6d75570d5258a6927f5c2cc9ad0fc2192f2a30" Jan 22 06:38:56 crc kubenswrapper[4720]: I0122 06:38:56.219113 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" path="/var/lib/kubelet/pods/0e7aefc5-0cea-4908-99f3-7038ed16f7a0/volumes" Jan 22 06:38:58 crc 
kubenswrapper[4720]: I0122 06:38:58.495615 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496195 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496214 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496224 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496232 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496250 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496257 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496269 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496280 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496292 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: 
I0122 06:38:58.496300 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496310 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496317 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496329 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496338 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496347 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496354 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="extract-content" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496365 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496371 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496381 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 
06:38:58.496388 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496396 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496403 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.496416 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496423 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="extract-utilities" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496533 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="65587c45-16b7-47d5-882f-b57a4beb79c5" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496544 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e7aefc5-0cea-4908-99f3-7038ed16f7a0" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496562 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d99952-87c4-42b4-9679-689a9b8e3c63" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.496572 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b692d0a1-233a-41a6-b673-79eb7648c3b8" containerName="registry-server" Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497052 4720 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497355 4720 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497540 4720 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497595 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a" gracePeriod=15
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497715 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497730 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497742 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497751 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497763 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497770 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497779 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497787 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497797 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497805 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497804 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95" gracePeriod=15
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.497821 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497830 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497780 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5" gracePeriod=15
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498027 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498043 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498056 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497873 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f" gracePeriod=15
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.497780 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128" gracePeriod=15
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498066 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498313 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498326 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.498442 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.498453 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.503497 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.569646 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.645965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646066 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646119 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646141 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646174 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.646189 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747599 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747655 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747664 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747676 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747705 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747742 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747770 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747770 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747789 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747818 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747731 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.747802 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.832224 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.834409 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.835694 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5" exitCode=0
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.835758 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128" exitCode=0
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.835776 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95" exitCode=0
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.835796 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f" exitCode=2
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.835834 4720 scope.go:117] "RemoveContainer" containerID="9f51bf89aa732948c3672632435a055b62f58316da07931b2773bc2d3c10789e"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.839341 4720 generic.go:334] "Generic (PLEG): container finished" podID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" containerID="d3a1764e955ca548b34f199fac5c54e94189b187a5d9c70f34bc08177aa4ad8e" exitCode=0
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.839407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35ef24cd-5470-42e1-9bdc-c68ec760aae2","Type":"ContainerDied","Data":"d3a1764e955ca548b34f199fac5c54e94189b187a5d9c70f34bc08177aa4ad8e"}
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.840848 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.841473 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:38:58 crc kubenswrapper[4720]: I0122 06:38:58.844836 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:38:58 crc kubenswrapper[4720]: W0122 06:38:58.883347 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ffcbda8545c28c5bc80cc784056ede109798390fb3fed5bb1468cb8a0c385a5b WatchSource:0}: Error finding container ffcbda8545c28c5bc80cc784056ede109798390fb3fed5bb1468cb8a0c385a5b: Status 404 returned error can't find the container with id ffcbda8545c28c5bc80cc784056ede109798390fb3fed5bb1468cb8a0c385a5b
Jan 22 06:38:58 crc kubenswrapper[4720]: E0122 06:38:58.888019 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cfa478e56ac6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,LastTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.780575 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.780666 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.780721 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.781451 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.781539 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd" gracePeriod=600
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.853819 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.858054 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755"}
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.858144 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ffcbda8545c28c5bc80cc784056ede109798390fb3fed5bb1468cb8a0c385a5b"}
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.858664 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:38:59 crc kubenswrapper[4720]: I0122 06:38:59.859429 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:38:59 crc kubenswrapper[4720]: E0122 06:38:59.985858 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cfa478e56ac6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,LastTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.212095 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.213702 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.214296 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.369233 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock\") pod \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") "
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.369369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access\") pod \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") "
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.369469 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock" (OuterVolumeSpecName: "var-lock") pod "35ef24cd-5470-42e1-9bdc-c68ec760aae2" (UID: "35ef24cd-5470-42e1-9bdc-c68ec760aae2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.369517 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir\") pod \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\" (UID: \"35ef24cd-5470-42e1-9bdc-c68ec760aae2\") "
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.369640 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35ef24cd-5470-42e1-9bdc-c68ec760aae2" (UID: "35ef24cd-5470-42e1-9bdc-c68ec760aae2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.370055 4720 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-var-lock\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.370092 4720 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.378688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35ef24cd-5470-42e1-9bdc-c68ec760aae2" (UID: "35ef24cd-5470-42e1-9bdc-c68ec760aae2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.471965 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35ef24cd-5470-42e1-9bdc-c68ec760aae2-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.867241 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.867239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35ef24cd-5470-42e1-9bdc-c68ec760aae2","Type":"ContainerDied","Data":"0ced4493328f6104b78cb5fa0d139a6993caaa000a5705f976d4ffb4239dea66"}
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.867969 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ced4493328f6104b78cb5fa0d139a6993caaa000a5705f976d4ffb4239dea66"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.873232 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd" exitCode=0
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.874610 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd"}
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.874664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e"}
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.876139 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.876614 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.877233 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.905698 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.905947 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.906159 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.981494 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.982967 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.983870 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.984151 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.984352 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:00 crc kubenswrapper[4720]: I0122 06:39:00.984572 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084374 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084590 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084713 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.084901 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.085422 4720 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.085466 4720 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.085486 4720 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.373944 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.375149 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused"
Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.375596 4720 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.376174 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.376983 4720 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.377035 4720 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.377416 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.578433 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.885739 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.886862 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a" exitCode=0 Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.886993 4720 scope.go:117] "RemoveContainer" containerID="1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.887112 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.913690 4720 scope.go:117] "RemoveContainer" containerID="927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.916749 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.917515 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.918029 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.919012 4720 status_manager.go:851] "Failed to get status for 
pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.934050 4720 scope.go:117] "RemoveContainer" containerID="1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.954264 4720 scope.go:117] "RemoveContainer" containerID="65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.973767 4720 scope.go:117] "RemoveContainer" containerID="b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a" Jan 22 06:39:01 crc kubenswrapper[4720]: E0122 06:39:01.979676 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Jan 22 06:39:01 crc kubenswrapper[4720]: I0122 06:39:01.995109 4720 scope.go:117] "RemoveContainer" containerID="3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.044537 4720 scope.go:117] "RemoveContainer" containerID="1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.045716 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\": container with ID starting with 1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5 not found: ID does not exist" containerID="1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5" Jan 22 
06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.045799 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5"} err="failed to get container status \"1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\": rpc error: code = NotFound desc = could not find container \"1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5\": container with ID starting with 1b944677a78ab673857bf2dd507dcc2eced1caef632613ec1e2054cb76ef57f5 not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.045856 4720 scope.go:117] "RemoveContainer" containerID="927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.046353 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\": container with ID starting with 927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128 not found: ID does not exist" containerID="927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.046413 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128"} err="failed to get container status \"927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\": rpc error: code = NotFound desc = could not find container \"927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128\": container with ID starting with 927a5a6aba8d7533888fb9e5e664fbaed25c6e6c0616c5b3c6f0afd6f45aa128 not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.046456 4720 scope.go:117] "RemoveContainer" 
containerID="1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.047124 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\": container with ID starting with 1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95 not found: ID does not exist" containerID="1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.047179 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95"} err="failed to get container status \"1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\": rpc error: code = NotFound desc = could not find container \"1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95\": container with ID starting with 1d21c2ba52009379f44498a3064c67bcd1a54a6277cc2b1df9db949bf0871e95 not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.047209 4720 scope.go:117] "RemoveContainer" containerID="65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.047841 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\": container with ID starting with 65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f not found: ID does not exist" containerID="65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.047883 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f"} err="failed to get container status \"65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\": rpc error: code = NotFound desc = could not find container \"65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f\": container with ID starting with 65a3043e63bb84b58bb5486070eb84265d51c514df2d766703a62382b3d6f66f not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.047998 4720 scope.go:117] "RemoveContainer" containerID="b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.048597 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\": container with ID starting with b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a not found: ID does not exist" containerID="b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.048640 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a"} err="failed to get container status \"b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\": rpc error: code = NotFound desc = could not find container \"b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a\": container with ID starting with b844e983a5de29cec0e9c51f209f04921cd45b437887ca38e93348bc0816832a not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.048665 4720 scope.go:117] "RemoveContainer" containerID="3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.049079 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\": container with ID starting with 3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f not found: ID does not exist" containerID="3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.049121 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f"} err="failed to get container status \"3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\": rpc error: code = NotFound desc = could not find container \"3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f\": container with ID starting with 3418e6eacd36147ae6ae1be72f23113dea7dbaab3597ba0fee76fbc846c0162f not found: ID does not exist" Jan 22 06:39:02 crc kubenswrapper[4720]: I0122 06:39:02.219855 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 22 06:39:02 crc kubenswrapper[4720]: E0122 06:39:02.781054 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Jan 22 06:39:04 crc kubenswrapper[4720]: E0122 06:39:04.382338 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Jan 22 06:39:07 crc kubenswrapper[4720]: E0122 06:39:07.583903 4720 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s" Jan 22 06:39:08 crc kubenswrapper[4720]: I0122 06:39:08.216834 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:08 crc kubenswrapper[4720]: I0122 06:39:08.217663 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:08 crc kubenswrapper[4720]: I0122 06:39:08.218216 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: E0122 06:39:10.006129 4720 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cfa478e56ac6a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,LastTimestamp:2026-01-22 06:38:58.887429226 +0000 UTC m=+231.029335931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.210144 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.211317 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.212148 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.212716 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.225453 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.225500 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:10 crc kubenswrapper[4720]: E0122 06:39:10.226105 4720 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.227166 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.948469 4720 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="1424bd42c886e20d090aaf13bff2192bfa19df79195286a22f4aaaacad3c5723" exitCode=0 Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.948590 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"1424bd42c886e20d090aaf13bff2192bfa19df79195286a22f4aaaacad3c5723"} Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.948889 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa9076b3030b5fb9f68321bdc6c9086fc1e4d43f46bf1baf4e46e8284f5024b9"} Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.949206 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.949225 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.949782 4720 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: E0122 06:39:10.949782 4720 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.950377 4720 status_manager.go:851] "Failed to get status for pod" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-bnsvd\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:10 crc kubenswrapper[4720]: I0122 06:39:10.950786 4720 status_manager.go:851] "Failed to get status for pod" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.615290 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerName="oauth-openshift" containerID="cri-o://85b3392bd6d1b940f7e5952dc94140d5443b7fe0c090bf6d2d872637d20fc59a" gracePeriod=15 Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.969880 4720 generic.go:334] "Generic (PLEG): container finished" podID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerID="85b3392bd6d1b940f7e5952dc94140d5443b7fe0c090bf6d2d872637d20fc59a" exitCode=0 Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.969977 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" event={"ID":"0a21ae7b-9111-4c9f-a378-f2acdb19931a","Type":"ContainerDied","Data":"85b3392bd6d1b940f7e5952dc94140d5443b7fe0c090bf6d2d872637d20fc59a"} Jan 22 
06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.986804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aac9f16f4bece6b6459f2a880c287baf15bfd9f2726a3aef012a86d69b50b1e1"} Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.986865 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31b9cccb5ee95e35bc2ca3102f48d85dbebff39b816bc8209ccba69d33cb83c8"} Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.986876 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"904949eea2ae9285bb0e7c89b774fda76f28756bf8aa8e666f7fb539390f3780"} Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.995227 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.995288 4720 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d" exitCode=1 Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.995340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d"} Jan 22 06:39:11 crc kubenswrapper[4720]: I0122 06:39:11.995823 4720 scope.go:117] "RemoveContainer" containerID="eff41c6fc4cac32edebfbabd313603d50771e92427938f4fc8c12bc75563133d" Jan 22 06:39:12 crc kubenswrapper[4720]: 
I0122 06:39:12.096602 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.246764 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247315 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247359 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247385 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session\") pod 
\"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247427 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247451 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247481 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247505 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247531 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 
crc kubenswrapper[4720]: I0122 06:39:12.247551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247589 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247618 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.247654 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z727j\" (UniqueName: \"kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j\") pod \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\" (UID: \"0a21ae7b-9111-4c9f-a378-f2acdb19931a\") " Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.249260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.249571 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.250277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.250728 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.251241 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.253679 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.253764 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.254079 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.254687 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.254848 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.256181 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j" (OuterVolumeSpecName: "kube-api-access-z727j") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "kube-api-access-z727j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.257774 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.259247 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.269210 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0a21ae7b-9111-4c9f-a378-f2acdb19931a" (UID: "0a21ae7b-9111-4c9f-a378-f2acdb19931a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349401 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z727j\" (UniqueName: \"kubernetes.io/projected/0a21ae7b-9111-4c9f-a378-f2acdb19931a-kube-api-access-z727j\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349453 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349467 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349477 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349489 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349499 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349514 4720 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349527 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349540 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349552 4720 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349562 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349573 4720 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349586 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:12 crc kubenswrapper[4720]: I0122 06:39:12.349597 4720 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0a21ae7b-9111-4c9f-a378-f2acdb19931a-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.006138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d0a8b54c2648f9240227a78a67716d90948d061668d59c6bae5a71fc13a0f685"} Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.006193 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c13ae8602fe1438e299b144b047b16097672267b1f74f158cc0cdc7e544bd44"} Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.006361 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.006511 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.006544 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.010213 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.010424 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb772751877d8b8f92782dfedba940fd280eaf5fdb90ceb286ab4ba3de0bb21b"} Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.012329 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" event={"ID":"0a21ae7b-9111-4c9f-a378-f2acdb19931a","Type":"ContainerDied","Data":"f8a1bbd1ba9b0747f3229ccaaee8b19a5c73e67d23ce46e23f36b0f7f4695acb"} Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.012407 4720 scope.go:117] "RemoveContainer" containerID="85b3392bd6d1b940f7e5952dc94140d5443b7fe0c090bf6d2d872637d20fc59a" Jan 22 06:39:13 crc kubenswrapper[4720]: I0122 06:39:13.012413 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-vp8tq" Jan 22 06:39:15 crc kubenswrapper[4720]: I0122 06:39:15.228539 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:15 crc kubenswrapper[4720]: I0122 06:39:15.229110 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:15 crc kubenswrapper[4720]: I0122 06:39:15.236677 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:18 crc kubenswrapper[4720]: I0122 06:39:18.022522 4720 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:18 crc kubenswrapper[4720]: I0122 06:39:18.047480 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:18 crc kubenswrapper[4720]: I0122 06:39:18.047523 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:18 crc kubenswrapper[4720]: I0122 06:39:18.058489 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 22 06:39:18 crc kubenswrapper[4720]: I0122 06:39:18.231923 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4584a3c2-c788-4fa4-9115-fbd9ab733569" Jan 22 06:39:19 crc kubenswrapper[4720]: I0122 06:39:19.053845 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:19 crc kubenswrapper[4720]: I0122 06:39:19.053935 4720 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc" Jan 22 06:39:19 crc kubenswrapper[4720]: I0122 06:39:19.057803 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4584a3c2-c788-4fa4-9115-fbd9ab733569" Jan 22 06:39:20 crc kubenswrapper[4720]: I0122 06:39:20.378090 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:39:20 crc kubenswrapper[4720]: I0122 06:39:20.378703 4720 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 06:39:20 crc kubenswrapper[4720]: I0122 06:39:20.378849 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 06:39:20 crc kubenswrapper[4720]: I0122 06:39:20.583841 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:39:27 crc kubenswrapper[4720]: I0122 06:39:27.932403 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 22 06:39:28 crc kubenswrapper[4720]: I0122 06:39:28.652889 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 22 
06:39:28 crc kubenswrapper[4720]: I0122 06:39:28.969074 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.148404 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.274078 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.462044 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.583509 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.622501 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.756016 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.784412 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.836360 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 22 06:39:29 crc kubenswrapper[4720]: I0122 06:39:29.993028 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 
06:39:30.042553 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.239781 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.378281 4720 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.378690 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.696668 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.785613 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.835454 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.891056 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.939199 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 22 06:39:30 crc kubenswrapper[4720]: I0122 06:39:30.986218 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.330618 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.497668 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.632450 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.679560 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.783932 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.877541 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.889724 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 22 06:39:31 crc kubenswrapper[4720]: I0122 06:39:31.948901 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.045836 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 22 06:39:32 crc kubenswrapper[4720]: 
I0122 06:39:32.048027 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.245367 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.360707 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.404699 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.420225 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.461173 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.542581 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.642157 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.654081 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.668467 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.686212 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.709382 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.866209 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.891253 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.903164 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 22 06:39:32 crc kubenswrapper[4720]: I0122 06:39:32.968604 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.000994 4720 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.132661 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.164963 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.288492 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.289138 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.334808 4720 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.386398 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.520058 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.604275 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.688287 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.696440 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.745515 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.754030 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.825209 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.836198 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 22 06:39:33 crc kubenswrapper[4720]: I0122 06:39:33.869967 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.036892 4720 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.184210 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.263852 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.282133 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.308297 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.347936 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.411954 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.425430 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.562440 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.579412 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.762748 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.796806 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.830784 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.846481 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.873280 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.902550 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.906724 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.921684 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.953518 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.975598 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 22 06:39:34 crc kubenswrapper[4720]: I0122 06:39:34.976745 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.021880 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.209601 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.231152 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.236081 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.311132 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.361403 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.386021 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.398954 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.531660 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.533454 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.687951 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.767131 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.767492 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.794317 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 22 06:39:35 crc kubenswrapper[4720]: I0122 06:39:35.825156 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.005705 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.065560 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.152204 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.312816 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.411583 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.483827 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.529796 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.565392 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.640243 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.804664 4720 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.809667 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.809629638 podStartE2EDuration="38.809629638s" podCreationTimestamp="2026-01-22 06:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:39:17.868736947 +0000 UTC m=+250.010643682" watchObservedRunningTime="2026-01-22 06:39:36.809629638 +0000 UTC m=+268.951536353"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.810431 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-vp8tq"]
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.810493 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7687c8778f-lkfgf"]
Jan 22 06:39:36 crc kubenswrapper[4720]: E0122 06:39:36.810734 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" containerName="installer"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.810799 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" containerName="installer"
Jan 22 06:39:36 crc kubenswrapper[4720]: E0122 06:39:36.810831 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerName="oauth-openshift"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.810840 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerName="oauth-openshift"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.811219 4720 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.811292 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71c3232e-a7c6-4127-b9ae-54b793cf40fc"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.811230 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="35ef24cd-5470-42e1-9bdc-c68ec760aae2" containerName="installer"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.811387 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" containerName="oauth-openshift"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.811995 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.818843 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.819630 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.819690 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.820166 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.820804 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.820875 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.821036 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.821064 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.821691 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.822582 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.826877 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.828643 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.830671 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.836332 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.849215 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.853952 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.859148 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.859125002 podStartE2EDuration="18.859125002s" podCreationTimestamp="2026-01-22 06:39:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:39:36.857767353 +0000 UTC m=+268.999674068" watchObservedRunningTime="2026-01-22 06:39:36.859125002 +0000 UTC m=+269.001031707"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.904864 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6mw\" (UniqueName: \"kubernetes.io/projected/cba38743-765b-48f5-a740-43d0687f95ed-kube-api-access-gj6mw\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938554 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938594 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-audit-policies\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938633 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938676 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938831 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cba38743-765b-48f5-a740-43d0687f95ed-audit-dir\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-error\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.938993 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-session\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.939033 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.939118 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.939171 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.939220 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.939248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.957297 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.966648 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 22 06:39:36 crc kubenswrapper[4720]: I0122 06:39:36.993756 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.000708 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.026967 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-error\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040820 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-session\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040849 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040870 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040902 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.040989 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041039 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6mw\" (UniqueName: \"kubernetes.io/projected/cba38743-765b-48f5-a740-43d0687f95ed-kube-api-access-gj6mw\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041065 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041096 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-audit-policies\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041147 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041214 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cba38743-765b-48f5-a740-43d0687f95ed-audit-dir\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.041320 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cba38743-765b-48f5-a740-43d0687f95ed-audit-dir\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.042001 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.043477 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.043967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-audit-policies\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.044657 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.049449 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-error\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.049618 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.050099 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.050529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-login\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.050608 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.051334 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.053575 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-system-session\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.056361 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cba38743-765b-48f5-a740-43d0687f95ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.063257 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6mw\" (UniqueName: \"kubernetes.io/projected/cba38743-765b-48f5-a740-43d0687f95ed-kube-api-access-gj6mw\") pod \"oauth-openshift-7687c8778f-lkfgf\" (UID: \"cba38743-765b-48f5-a740-43d0687f95ed\") " pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.077926 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.156384 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.203936 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.228947 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.356274 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7687c8778f-lkfgf"]
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.365623 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.385199 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.460006 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.504355 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.615541 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.641850 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.670708 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.716776 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.745632 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.754284 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.763528 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.817567 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.831898 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.919598 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.927662 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.948250 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 22 06:39:37 crc kubenswrapper[4720]: I0122 06:39:37.950780 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.002310 4720 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.172644 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.196974 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf" event={"ID":"cba38743-765b-48f5-a740-43d0687f95ed","Type":"ContainerStarted","Data":"9968dff9af905d6f1652ebd336e6f63f4066a743f7ee2af9520338407cb118ca"} Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.197067 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf" event={"ID":"cba38743-765b-48f5-a740-43d0687f95ed","Type":"ContainerStarted","Data":"b2d1ff2fe10dadedf8647969aa6a4d84f647758e25696babf2148b7b35cba879"} Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.197518 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.218275 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a21ae7b-9111-4c9f-a378-f2acdb19931a" path="/var/lib/kubelet/pods/0a21ae7b-9111-4c9f-a378-f2acdb19931a/volumes" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.230560 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.231887 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.236076 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7687c8778f-lkfgf" podStartSLOduration=52.236054987 
podStartE2EDuration="52.236054987s" podCreationTimestamp="2026-01-22 06:38:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:39:38.230396462 +0000 UTC m=+270.372303177" watchObservedRunningTime="2026-01-22 06:39:38.236054987 +0000 UTC m=+270.377961692" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.274954 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.275177 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.328797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.377309 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.391479 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.510567 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.704226 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.731688 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.880858 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 
22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.890719 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 22 06:39:38 crc kubenswrapper[4720]: I0122 06:39:38.979896 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.000825 4720 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.016054 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.059268 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.097585 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.122858 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.140028 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.162533 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.185526 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.197101 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 
22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.300244 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.310992 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.371464 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.381721 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.492217 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.511871 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.514222 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.520030 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.677317 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.726316 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.750184 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.808875 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.882004 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.884407 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.952167 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 06:39:39 crc kubenswrapper[4720]: I0122 06:39:39.991318 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.086707 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.241836 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.284125 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.346114 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.373232 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.383118 4720 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.390816 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.397419 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.397563 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.423239 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.520106 4720 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.520497 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755" gracePeriod=5 Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.522307 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.633950 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.746812 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.762891 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.882781 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.901804 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 22 06:39:40 crc kubenswrapper[4720]: I0122 06:39:40.970511 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.198315 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.236821 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.261764 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.347278 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.366028 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.390860 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.476196 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.488943 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.490334 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.552699 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.663271 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.926627 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.940058 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 22 06:39:41 crc kubenswrapper[4720]: I0122 06:39:41.948107 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.047334 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.146998 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.173278 4720 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.268846 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.283930 4720 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.492843 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.519117 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.531670 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.713042 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.794599 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 22 06:39:42 crc kubenswrapper[4720]: I0122 06:39:42.900115 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.117929 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.242748 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 22 06:39:43 crc 
kubenswrapper[4720]: I0122 06:39:43.275588 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.319861 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.382762 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.404796 4720 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.438740 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.469784 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.652397 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.682432 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.701841 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 22 06:39:43 crc kubenswrapper[4720]: I0122 06:39:43.802006 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 22 06:39:44 crc kubenswrapper[4720]: I0122 06:39:44.129399 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" 
Jan 22 06:39:44 crc kubenswrapper[4720]: I0122 06:39:44.145696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 22 06:39:44 crc kubenswrapper[4720]: I0122 06:39:44.239237 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 22 06:39:44 crc kubenswrapper[4720]: I0122 06:39:44.337599 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 22 06:39:44 crc kubenswrapper[4720]: I0122 06:39:44.870640 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.024086 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.039639 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.297309 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.371055 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.764938 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 22 06:39:45 crc kubenswrapper[4720]: I0122 06:39:45.960436 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.116464 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.142209 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.142330 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174256 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174353 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174397 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174510 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.174903 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.175001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.175040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.175076 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.188945 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.223076 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.223539 4720 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.240102 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.240189 4720 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6e481e87-9a72-4d48-b301-af82b5088a82"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.244199 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249014 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249100 4720 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6e481e87-9a72-4d48-b301-af82b5088a82"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249193 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249271 4720 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755" exitCode=137
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249351 4720 scope.go:117] "RemoveContainer" containerID="b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.249416 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.276383 4720 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.276425 4720 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.276445 4720 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.276464 4720 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.276483 4720 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.283967 4720 scope.go:117] "RemoveContainer" containerID="b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755"
Jan 22 06:39:46 crc kubenswrapper[4720]: E0122 06:39:46.285040 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755\": container with ID starting with b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755 not found: ID does not exist" containerID="b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755"
Jan 22 06:39:46 crc kubenswrapper[4720]: I0122 06:39:46.285120 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755"} err="failed to get container status \"b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755\": rpc error: code = NotFound desc = could not find container \"b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755\": container with ID starting with b5a0fb08cc8f5673e06a8247a5446fdc96490ad686e59fb5c0414e0ea636c755 not found: ID does not exist"
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.316611 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"]
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.317696 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerName="controller-manager" containerID="cri-o://4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c" gracePeriod=30
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.406245 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"]
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.406799 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerName="route-controller-manager" containerID="cri-o://9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001" gracePeriod=30
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.752865 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt"
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.835502 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.918160 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert\") pod \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") "
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.918244 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca\") pod \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") "
Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.918263 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles\") pod
\"3f7c9fba-71e2-44d4-9601-be0ffa541be4\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.918430 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c\") pod \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.918459 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config\") pod \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\" (UID: \"3f7c9fba-71e2-44d4-9601-be0ffa541be4\") " Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.919470 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3f7c9fba-71e2-44d4-9601-be0ffa541be4" (UID: "3f7c9fba-71e2-44d4-9601-be0ffa541be4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.919566 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config" (OuterVolumeSpecName: "config") pod "3f7c9fba-71e2-44d4-9601-be0ffa541be4" (UID: "3f7c9fba-71e2-44d4-9601-be0ffa541be4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.919603 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca" (OuterVolumeSpecName: "client-ca") pod "3f7c9fba-71e2-44d4-9601-be0ffa541be4" (UID: "3f7c9fba-71e2-44d4-9601-be0ffa541be4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.925145 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c" (OuterVolumeSpecName: "kube-api-access-69v9c") pod "3f7c9fba-71e2-44d4-9601-be0ffa541be4" (UID: "3f7c9fba-71e2-44d4-9601-be0ffa541be4"). InnerVolumeSpecName "kube-api-access-69v9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:40:00 crc kubenswrapper[4720]: I0122 06:40:00.926640 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3f7c9fba-71e2-44d4-9601-be0ffa541be4" (UID: "3f7c9fba-71e2-44d4-9601-be0ffa541be4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020375 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqjj6\" (UniqueName: \"kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6\") pod \"508eaeea-db9b-4801-a9d3-a758e3ae9502\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020448 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config\") pod \"508eaeea-db9b-4801-a9d3-a758e3ae9502\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020471 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca\") pod \"508eaeea-db9b-4801-a9d3-a758e3ae9502\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert\") pod \"508eaeea-db9b-4801-a9d3-a758e3ae9502\" (UID: \"508eaeea-db9b-4801-a9d3-a758e3ae9502\") " Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020836 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f7c9fba-71e2-44d4-9601-be0ffa541be4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020847 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc 
kubenswrapper[4720]: I0122 06:40:01.020855 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020866 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69v9c\" (UniqueName: \"kubernetes.io/projected/3f7c9fba-71e2-44d4-9601-be0ffa541be4-kube-api-access-69v9c\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.020875 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f7c9fba-71e2-44d4-9601-be0ffa541be4-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.022032 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config" (OuterVolumeSpecName: "config") pod "508eaeea-db9b-4801-a9d3-a758e3ae9502" (UID: "508eaeea-db9b-4801-a9d3-a758e3ae9502"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.022094 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca" (OuterVolumeSpecName: "client-ca") pod "508eaeea-db9b-4801-a9d3-a758e3ae9502" (UID: "508eaeea-db9b-4801-a9d3-a758e3ae9502"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.026466 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "508eaeea-db9b-4801-a9d3-a758e3ae9502" (UID: "508eaeea-db9b-4801-a9d3-a758e3ae9502"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.026670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6" (OuterVolumeSpecName: "kube-api-access-wqjj6") pod "508eaeea-db9b-4801-a9d3-a758e3ae9502" (UID: "508eaeea-db9b-4801-a9d3-a758e3ae9502"). InnerVolumeSpecName "kube-api-access-wqjj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.122661 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqjj6\" (UniqueName: \"kubernetes.io/projected/508eaeea-db9b-4801-a9d3-a758e3ae9502-kube-api-access-wqjj6\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.122756 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.122797 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/508eaeea-db9b-4801-a9d3-a758e3ae9502-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.122828 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/508eaeea-db9b-4801-a9d3-a758e3ae9502-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.363849 4720 generic.go:334] "Generic (PLEG): container finished" podID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerID="9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001" exitCode=0 Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.363981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" event={"ID":"508eaeea-db9b-4801-a9d3-a758e3ae9502","Type":"ContainerDied","Data":"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001"} Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.364539 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" event={"ID":"508eaeea-db9b-4801-a9d3-a758e3ae9502","Type":"ContainerDied","Data":"58568531637fc48f04358aa29bcfbcfda9fa1c2b3f8b3987421bb8d9943e45e6"} Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.364587 4720 scope.go:117] "RemoveContainer" containerID="9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.364026 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.367956 4720 generic.go:334] "Generic (PLEG): container finished" podID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerID="4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c" exitCode=0 Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.368026 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" event={"ID":"3f7c9fba-71e2-44d4-9601-be0ffa541be4","Type":"ContainerDied","Data":"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c"} Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.368071 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" event={"ID":"3f7c9fba-71e2-44d4-9601-be0ffa541be4","Type":"ContainerDied","Data":"a39126b9faad9e2b2fc2a69217c4e4799a1bf64d17de49ec063690b97535b1b4"} Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.368091 4720 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dhklt" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.398593 4720 scope.go:117] "RemoveContainer" containerID="9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.399295 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001\": container with ID starting with 9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001 not found: ID does not exist" containerID="9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.399363 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001"} err="failed to get container status \"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001\": rpc error: code = NotFound desc = could not find container \"9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001\": container with ID starting with 9bc9b941f7c8ad12159f344c981f602a4d2e44205a59a4d4340247cba159a001 not found: ID does not exist" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.399412 4720 scope.go:117] "RemoveContainer" containerID="4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.433361 4720 scope.go:117] "RemoveContainer" containerID="4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.434388 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c\": container 
with ID starting with 4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c not found: ID does not exist" containerID="4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.434498 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c"} err="failed to get container status \"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c\": rpc error: code = NotFound desc = could not find container \"4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c\": container with ID starting with 4f97b9b13645eb606ce13a5d46ecd0447ac2ef480597dd15283f1323c6cc676c not found: ID does not exist" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.438877 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.452032 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dhklt"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.461022 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.466387 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxkzq"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939349 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.939624 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerName="controller-manager" Jan 22 06:40:01 crc 
kubenswrapper[4720]: I0122 06:40:01.939643 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerName="controller-manager" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.939666 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939676 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.939691 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerName="route-controller-manager" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939700 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerName="route-controller-manager" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939808 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" containerName="route-controller-manager" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939823 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.939835 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" containerName="controller-manager" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.940391 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.943076 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.943427 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.944432 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.944502 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.944738 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.945633 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.947681 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.947692 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 22 06:40:01 crc kubenswrapper[4720]: W0122 06:40:01.947755 4720 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 22 06:40:01 crc kubenswrapper[4720]: W0122 06:40:01.947755 4720 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 22 06:40:01 crc kubenswrapper[4720]: W0122 06:40:01.947692 4720 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.947812 4720 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User 
\"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.947855 4720 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.947802 4720 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 06:40:01 crc kubenswrapper[4720]: W0122 06:40:01.947798 4720 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.948118 4720 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group 
\"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 06:40:01 crc kubenswrapper[4720]: W0122 06:40:01.949888 4720 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 22 06:40:01 crc kubenswrapper[4720]: E0122 06:40:01.949944 4720 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.951122 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.964396 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.971225 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:01 crc kubenswrapper[4720]: I0122 06:40:01.975881 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.040927 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.040986 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041053 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grk4c\" (UniqueName: \"kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041193 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041368 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " 
pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041406 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6sxv\" (UniqueName: \"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.041707 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.143446 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.143536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144489 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144519 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144546 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grk4c\" (UniqueName: \"kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc 
kubenswrapper[4720]: I0122 06:40:02.144571 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144628 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.144670 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6sxv\" (UniqueName: \"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.145700 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca\") pod 
\"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.146155 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.146228 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.156743 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.156873 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.161875 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grk4c\" (UniqueName: 
\"kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c\") pod \"controller-manager-5964cbcb45-drf2n\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.218726 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f7c9fba-71e2-44d4-9601-be0ffa541be4" path="/var/lib/kubelet/pods/3f7c9fba-71e2-44d4-9601-be0ffa541be4/volumes" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.219517 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508eaeea-db9b-4801-a9d3-a758e3ae9502" path="/var/lib/kubelet/pods/508eaeea-db9b-4801-a9d3-a758e3ae9502/volumes" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.264314 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:02 crc kubenswrapper[4720]: I0122 06:40:02.591053 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.058654 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.145562 4720 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.145607 4720 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.145645 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca 
podName:5df93441-3446-474a-9f82-82bba08eb13f nodeName:}" failed. No retries permitted until 2026-01-22 06:40:03.645624587 +0000 UTC m=+295.787531292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca") pod "route-controller-manager-5598468cdf-65dfj" (UID: "5df93441-3446-474a-9f82-82bba08eb13f") : failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.145700 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config podName:5df93441-3446-474a-9f82-82bba08eb13f nodeName:}" failed. No retries permitted until 2026-01-22 06:40:03.645679908 +0000 UTC m=+295.787586613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config") pod "route-controller-manager-5598468cdf-65dfj" (UID: "5df93441-3446-474a-9f82-82bba08eb13f") : failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.153687 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.162480 4720 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.162551 4720 projected.go:194] Error preparing data for projected volume kube-api-access-k6sxv for pod openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj: failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: E0122 06:40:03.162627 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv podName:5df93441-3446-474a-9f82-82bba08eb13f nodeName:}" failed. No retries permitted until 2026-01-22 06:40:03.662605492 +0000 UTC m=+295.804512197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k6sxv" (UniqueName: "kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv") pod "route-controller-manager-5598468cdf-65dfj" (UID: "5df93441-3446-474a-9f82-82bba08eb13f") : failed to sync configmap cache: timed out waiting for the condition Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.394191 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.423218 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" event={"ID":"41ddc794-fce7-4633-a525-2d491fca548e","Type":"ContainerStarted","Data":"d51438f0a23fd774008f24a60ed84375b3a415511ee73340ebcb827d709a8b00"} Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.423283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" event={"ID":"41ddc794-fce7-4633-a525-2d491fca548e","Type":"ContainerStarted","Data":"a51b4d153038cb3eac76e959d04c8c22fc62db3d3e313b972bfc720d7072f4c3"} Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.423905 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.433999 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.460192 4720 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.472769 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" podStartSLOduration=3.472724266 podStartE2EDuration="3.472724266s" podCreationTimestamp="2026-01-22 06:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:40:03.462478457 +0000 UTC m=+295.604385192" watchObservedRunningTime="2026-01-22 06:40:03.472724266 +0000 UTC m=+295.614631031" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.531760 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.665695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.665767 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.665803 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6sxv\" (UniqueName: 
\"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.667427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.667854 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.679522 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6sxv\" (UniqueName: \"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") pod \"route-controller-manager-5598468cdf-65dfj\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:03 crc kubenswrapper[4720]: I0122 06:40:03.776823 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.039669 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.431938 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" event={"ID":"5df93441-3446-474a-9f82-82bba08eb13f","Type":"ContainerStarted","Data":"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d"} Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.432444 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" event={"ID":"5df93441-3446-474a-9f82-82bba08eb13f","Type":"ContainerStarted","Data":"6c0efc40d2f56a101069c6adc50dd4e04c8a7029e787cc37a5518f08fbb689c7"} Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.432471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.453132 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" podStartSLOduration=4.453101995 podStartE2EDuration="4.453101995s" podCreationTimestamp="2026-01-22 06:40:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:40:04.451371015 +0000 UTC m=+296.593277740" watchObservedRunningTime="2026-01-22 06:40:04.453101995 +0000 UTC m=+296.595008730" Jan 22 06:40:04 crc kubenswrapper[4720]: I0122 06:40:04.974688 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:40:08 crc kubenswrapper[4720]: I0122 06:40:08.018199 4720 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 22 06:40:20 crc kubenswrapper[4720]: I0122 06:40:20.297455 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:20 crc kubenswrapper[4720]: I0122 06:40:20.300960 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" podUID="41ddc794-fce7-4633-a525-2d491fca548e" containerName="controller-manager" containerID="cri-o://d51438f0a23fd774008f24a60ed84375b3a415511ee73340ebcb827d709a8b00" gracePeriod=30 Jan 22 06:40:20 crc kubenswrapper[4720]: I0122 06:40:20.533704 4720 generic.go:334] "Generic (PLEG): container finished" podID="41ddc794-fce7-4633-a525-2d491fca548e" containerID="d51438f0a23fd774008f24a60ed84375b3a415511ee73340ebcb827d709a8b00" exitCode=0 Jan 22 06:40:20 crc kubenswrapper[4720]: I0122 06:40:20.533777 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" event={"ID":"41ddc794-fce7-4633-a525-2d491fca548e","Type":"ContainerDied","Data":"d51438f0a23fd774008f24a60ed84375b3a415511ee73340ebcb827d709a8b00"} Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.404175 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.443286 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt"] Jan 22 06:40:21 crc kubenswrapper[4720]: E0122 06:40:21.443711 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ddc794-fce7-4633-a525-2d491fca548e" containerName="controller-manager" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.443734 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ddc794-fce7-4633-a525-2d491fca548e" containerName="controller-manager" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.443933 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ddc794-fce7-4633-a525-2d491fca548e" containerName="controller-manager" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.444605 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.458097 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt"] Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.520119 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca\") pod \"41ddc794-fce7-4633-a525-2d491fca548e\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.520553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles\") pod \"41ddc794-fce7-4633-a525-2d491fca548e\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.520663 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert\") pod \"41ddc794-fce7-4633-a525-2d491fca548e\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.520694 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grk4c\" (UniqueName: \"kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c\") pod \"41ddc794-fce7-4633-a525-2d491fca548e\" (UID: \"41ddc794-fce7-4633-a525-2d491fca548e\") " Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.520728 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config\") pod \"41ddc794-fce7-4633-a525-2d491fca548e\" (UID: 
\"41ddc794-fce7-4633-a525-2d491fca548e\") " Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521009 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5t26\" (UniqueName: \"kubernetes.io/projected/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-kube-api-access-v5t26\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521066 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-config\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521095 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-serving-cert\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521194 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca" (OuterVolumeSpecName: "client-ca") pod "41ddc794-fce7-4633-a525-2d491fca548e" (UID: "41ddc794-fce7-4633-a525-2d491fca548e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521325 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-client-ca\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-proxy-ca-bundles\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521391 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41ddc794-fce7-4633-a525-2d491fca548e" (UID: "41ddc794-fce7-4633-a525-2d491fca548e"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521500 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521515 4720 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.521553 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config" (OuterVolumeSpecName: "config") pod "41ddc794-fce7-4633-a525-2d491fca548e" (UID: "41ddc794-fce7-4633-a525-2d491fca548e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.526326 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41ddc794-fce7-4633-a525-2d491fca548e" (UID: "41ddc794-fce7-4633-a525-2d491fca548e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.535057 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c" (OuterVolumeSpecName: "kube-api-access-grk4c") pod "41ddc794-fce7-4633-a525-2d491fca548e" (UID: "41ddc794-fce7-4633-a525-2d491fca548e"). InnerVolumeSpecName "kube-api-access-grk4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.550592 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" event={"ID":"41ddc794-fce7-4633-a525-2d491fca548e","Type":"ContainerDied","Data":"a51b4d153038cb3eac76e959d04c8c22fc62db3d3e313b972bfc720d7072f4c3"} Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.550663 4720 scope.go:117] "RemoveContainer" containerID="d51438f0a23fd774008f24a60ed84375b3a415511ee73340ebcb827d709a8b00" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.550706 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5964cbcb45-drf2n" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.595586 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.606466 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5964cbcb45-drf2n"] Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.622842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-client-ca\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.623137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-proxy-ca-bundles\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " 
pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.623277 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5t26\" (UniqueName: \"kubernetes.io/projected/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-kube-api-access-v5t26\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.623396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-config\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624280 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-proxy-ca-bundles\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-serving-cert\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624427 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ddc794-fce7-4633-a525-2d491fca548e-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624440 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grk4c\" (UniqueName: \"kubernetes.io/projected/41ddc794-fce7-4633-a525-2d491fca548e-kube-api-access-grk4c\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624457 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ddc794-fce7-4633-a525-2d491fca548e-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.624463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-client-ca\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.625569 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-config\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.629481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-serving-cert\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.645649 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5t26\" (UniqueName: 
\"kubernetes.io/projected/37b7dc91-dcbd-4de8-9bd3-ec60512c980b-kube-api-access-v5t26\") pod \"controller-manager-57f4c8cf9d-m8pxt\" (UID: \"37b7dc91-dcbd-4de8-9bd3-ec60512c980b\") " pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:21 crc kubenswrapper[4720]: I0122 06:40:21.764208 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.021484 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt"] Jan 22 06:40:22 crc kubenswrapper[4720]: W0122 06:40:22.026235 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37b7dc91_dcbd_4de8_9bd3_ec60512c980b.slice/crio-a6e86987d24c4fa0a6e44798f22397465d268c11a50e957a276cf48803ead635 WatchSource:0}: Error finding container a6e86987d24c4fa0a6e44798f22397465d268c11a50e957a276cf48803ead635: Status 404 returned error can't find the container with id a6e86987d24c4fa0a6e44798f22397465d268c11a50e957a276cf48803ead635 Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.219081 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ddc794-fce7-4633-a525-2d491fca548e" path="/var/lib/kubelet/pods/41ddc794-fce7-4633-a525-2d491fca548e/volumes" Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.561052 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" event={"ID":"37b7dc91-dcbd-4de8-9bd3-ec60512c980b","Type":"ContainerStarted","Data":"535c626886d121fc8be7650492ff444b0a7f179c6022ef30ec2c78480cae46c2"} Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.561115 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 
06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.561133 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" event={"ID":"37b7dc91-dcbd-4de8-9bd3-ec60512c980b","Type":"ContainerStarted","Data":"a6e86987d24c4fa0a6e44798f22397465d268c11a50e957a276cf48803ead635"} Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.565848 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" Jan 22 06:40:22 crc kubenswrapper[4720]: I0122 06:40:22.577971 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57f4c8cf9d-m8pxt" podStartSLOduration=2.577953097 podStartE2EDuration="2.577953097s" podCreationTimestamp="2026-01-22 06:40:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:40:22.574480426 +0000 UTC m=+314.716387141" watchObservedRunningTime="2026-01-22 06:40:22.577953097 +0000 UTC m=+314.719859802" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.747239 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mlgq8"] Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.748838 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.762864 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mlgq8"] Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-trusted-ca\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850468 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be139f4c-b5f7-45df-b015-707e901f0614-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850535 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be139f4c-b5f7-45df-b015-707e901f0614-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850552 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-registry-certificates\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850734 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qj4j\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-kube-api-access-8qj4j\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.850982 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-bound-sa-token\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.851217 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-registry-tls\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.891451 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952653 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be139f4c-b5f7-45df-b015-707e901f0614-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952704 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-registry-certificates\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952733 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qj4j\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-kube-api-access-8qj4j\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-bound-sa-token\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 
06:41:08.952791 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-registry-tls\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952817 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-trusted-ca\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.952840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be139f4c-b5f7-45df-b015-707e901f0614-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.954256 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/be139f4c-b5f7-45df-b015-707e901f0614-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.954785 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-registry-certificates\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.956260 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be139f4c-b5f7-45df-b015-707e901f0614-trusted-ca\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.979620 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-registry-tls\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.979859 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/be139f4c-b5f7-45df-b015-707e901f0614-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.982091 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-bound-sa-token\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: \"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:08 crc kubenswrapper[4720]: I0122 06:41:08.986884 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qj4j\" (UniqueName: \"kubernetes.io/projected/be139f4c-b5f7-45df-b015-707e901f0614-kube-api-access-8qj4j\") pod \"image-registry-66df7c8f76-mlgq8\" (UID: 
\"be139f4c-b5f7-45df-b015-707e901f0614\") " pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.074782 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.334124 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mlgq8"] Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.917479 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" event={"ID":"be139f4c-b5f7-45df-b015-707e901f0614","Type":"ContainerStarted","Data":"88c3b2c64b74d188492e7808d22255089b4c5cc6a96015e83be864620cd770d0"} Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.917541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" event={"ID":"be139f4c-b5f7-45df-b015-707e901f0614","Type":"ContainerStarted","Data":"d7043c54d002e66cad608c9f6a504765ea5938ab7c709d767d9395b5369be5cd"} Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.918157 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" Jan 22 06:41:09 crc kubenswrapper[4720]: I0122 06:41:09.958604 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8" podStartSLOduration=1.958533299 podStartE2EDuration="1.958533299s" podCreationTimestamp="2026-01-22 06:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:41:09.952998721 +0000 UTC m=+362.094905456" watchObservedRunningTime="2026-01-22 06:41:09.958533299 +0000 UTC m=+362.100440054" Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 
06:41:13.909268 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.910413 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgfdc" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="registry-server" containerID="cri-o://be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb" gracePeriod=30 Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.932667 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvbhh"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.935621 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bvbhh" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="registry-server" containerID="cri-o://65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff" gracePeriod=30 Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.954148 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.954520 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" containerID="cri-o://84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79" gracePeriod=30 Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.964868 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.965187 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z59w9" 
podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="registry-server" containerID="cri-o://34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8" gracePeriod=30 Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.975884 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.976255 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkz4c" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="registry-server" containerID="cri-o://b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064" gracePeriod=30 Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.985284 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bg62x"] Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.986306 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:13 crc kubenswrapper[4720]: I0122 06:41:13.999548 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bg62x"] Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.049844 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.049917 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmcmh\" (UniqueName: \"kubernetes.io/projected/1311d24a-e35a-489c-8010-0bca3da90f0f-kube-api-access-gmcmh\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.049966 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.152872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmcmh\" (UniqueName: \"kubernetes.io/projected/1311d24a-e35a-489c-8010-0bca3da90f0f-kube-api-access-gmcmh\") pod \"marketplace-operator-79b997595-bg62x\" (UID: 
\"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.152978 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.153070 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.157322 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.161817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1311d24a-e35a-489c-8010-0bca3da90f0f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.176354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gmcmh\" (UniqueName: \"kubernetes.io/projected/1311d24a-e35a-489c-8010-0bca3da90f0f-kube-api-access-gmcmh\") pod \"marketplace-operator-79b997595-bg62x\" (UID: \"1311d24a-e35a-489c-8010-0bca3da90f0f\") " pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.343325 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.356322 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfdc" Jan 22 06:41:14 crc kubenswrapper[4720]: E0122 06:41:14.399817 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff is running failed: container process not found" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 06:41:14 crc kubenswrapper[4720]: E0122 06:41:14.400574 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff is running failed: container process not found" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff" cmd=["grpc_health_probe","-addr=:50051"] Jan 22 06:41:14 crc kubenswrapper[4720]: E0122 06:41:14.400880 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff is running failed: container process not found" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 22 06:41:14 crc kubenswrapper[4720]: E0122 06:41:14.400927 4720 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bvbhh" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="registry-server" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.415942 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvbhh" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.427765 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.434222 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkz4c" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.457749 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") pod \"557f2e7c-b408-456f-bfc8-b6714839b46a\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.457834 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqxz\" (UniqueName: \"kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz\") pod \"557f2e7c-b408-456f-bfc8-b6714839b46a\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.457891 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities\") pod \"557f2e7c-b408-456f-bfc8-b6714839b46a\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.457920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content\") pod \"67487e16-e2f8-441f-9fd2-41e1997d91df\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.458024 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities\") pod \"67487e16-e2f8-441f-9fd2-41e1997d91df\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.458093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l8tjc\" (UniqueName: \"kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc\") pod \"67487e16-e2f8-441f-9fd2-41e1997d91df\" (UID: \"67487e16-e2f8-441f-9fd2-41e1997d91df\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.462496 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities" (OuterVolumeSpecName: "utilities") pod "557f2e7c-b408-456f-bfc8-b6714839b46a" (UID: "557f2e7c-b408-456f-bfc8-b6714839b46a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.468904 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities" (OuterVolumeSpecName: "utilities") pod "67487e16-e2f8-441f-9fd2-41e1997d91df" (UID: "67487e16-e2f8-441f-9fd2-41e1997d91df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.469896 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc" (OuterVolumeSpecName: "kube-api-access-l8tjc") pod "67487e16-e2f8-441f-9fd2-41e1997d91df" (UID: "67487e16-e2f8-441f-9fd2-41e1997d91df"). InnerVolumeSpecName "kube-api-access-l8tjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.470174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz" (OuterVolumeSpecName: "kube-api-access-kxqxz") pod "557f2e7c-b408-456f-bfc8-b6714839b46a" (UID: "557f2e7c-b408-456f-bfc8-b6714839b46a"). InnerVolumeSpecName "kube-api-access-kxqxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.475535 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59w9" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.539247 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67487e16-e2f8-441f-9fd2-41e1997d91df" (UID: "67487e16-e2f8-441f-9fd2-41e1997d91df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.559564 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca\") pod \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.559724 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities\") pod \"c8e6204f-9762-43b9-859a-74aaf49f30f4\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.559861 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities\") pod \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") " Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.560744 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content\") pod \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.560389 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "557f2e7c-b408-456f-bfc8-b6714839b46a" (UID: "557f2e7c-b408-456f-bfc8-b6714839b46a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.560406 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities" (OuterVolumeSpecName: "utilities") pod "c8e6204f-9762-43b9-859a-74aaf49f30f4" (UID: "c8e6204f-9762-43b9-859a-74aaf49f30f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.560494 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" (UID: "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573136 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghnhg\" (UniqueName: \"kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg\") pod \"c8e6204f-9762-43b9-859a-74aaf49f30f4\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.560673 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities" (OuterVolumeSpecName: "utilities") pod "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" (UID: "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573424 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8499\" (UniqueName: \"kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499\") pod \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content\") pod \"c8e6204f-9762-43b9-859a-74aaf49f30f4\" (UID: \"c8e6204f-9762-43b9-859a-74aaf49f30f4\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573632 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk7kl\" (UniqueName: \"kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl\") pod \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\" (UID: \"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573665 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") pod \"557f2e7c-b408-456f-bfc8-b6714839b46a\" (UID: \"557f2e7c-b408-456f-bfc8-b6714839b46a\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573695 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics\") pod \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\" (UID: \"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c\") "
Jan 22 06:41:14 crc kubenswrapper[4720]: W0122 06:41:14.573863 4720 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/557f2e7c-b408-456f-bfc8-b6714839b46a/volumes/kubernetes.io~empty-dir/catalog-content
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.573910 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "557f2e7c-b408-456f-bfc8-b6714839b46a" (UID: "557f2e7c-b408-456f-bfc8-b6714839b46a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574819 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574841 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqxz\" (UniqueName: \"kubernetes.io/projected/557f2e7c-b408-456f-bfc8-b6714839b46a-kube-api-access-kxqxz\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574857 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574869 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574879 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574889 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557f2e7c-b408-456f-bfc8-b6714839b46a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574899 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574958 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67487e16-e2f8-441f-9fd2-41e1997d91df-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.574972 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8tjc\" (UniqueName: \"kubernetes.io/projected/67487e16-e2f8-441f-9fd2-41e1997d91df-kube-api-access-l8tjc\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.577061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg" (OuterVolumeSpecName: "kube-api-access-ghnhg") pod "c8e6204f-9762-43b9-859a-74aaf49f30f4" (UID: "c8e6204f-9762-43b9-859a-74aaf49f30f4"). InnerVolumeSpecName "kube-api-access-ghnhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.577662 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl" (OuterVolumeSpecName: "kube-api-access-bk7kl") pod "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" (UID: "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e"). InnerVolumeSpecName "kube-api-access-bk7kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.579973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499" (OuterVolumeSpecName: "kube-api-access-t8499") pod "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" (UID: "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c"). InnerVolumeSpecName "kube-api-access-t8499". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.580585 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" (UID: "41f9ff9a-13f9-49b2-8ba6-0f56462cc94c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.585652 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" (UID: "42ecbfe2-1714-40ca-b7ac-191fcbd65b0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.676488 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk7kl\" (UniqueName: \"kubernetes.io/projected/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-kube-api-access-bk7kl\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.676608 4720 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.676627 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.676640 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghnhg\" (UniqueName: \"kubernetes.io/projected/c8e6204f-9762-43b9-859a-74aaf49f30f4-kube-api-access-ghnhg\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.676654 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8499\" (UniqueName: \"kubernetes.io/projected/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c-kube-api-access-t8499\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.710031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e6204f-9762-43b9-859a-74aaf49f30f4" (UID: "c8e6204f-9762-43b9-859a-74aaf49f30f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.778498 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e6204f-9762-43b9-859a-74aaf49f30f4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.839414 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bg62x"]
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.974070 4720 generic.go:334] "Generic (PLEG): container finished" podID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerID="be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb" exitCode=0
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.974217 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgfdc"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.974268 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerDied","Data":"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.975537 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgfdc" event={"ID":"67487e16-e2f8-441f-9fd2-41e1997d91df","Type":"ContainerDied","Data":"ced1ca345ac7b9710a7df25da559a5802d45fafbbcdea07a2e1c1b3f65b83df5"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.975613 4720 scope.go:117] "RemoveContainer" containerID="be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.980941 4720 generic.go:334] "Generic (PLEG): container finished" podID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff" exitCode=0
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.981013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerDied","Data":"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.981046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bvbhh" event={"ID":"557f2e7c-b408-456f-bfc8-b6714839b46a","Type":"ContainerDied","Data":"07ec8caf765924256883e9a75bdc3f59dd113b4d1d82e018c73b7751df7caa7b"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.981981 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bvbhh"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.984816 4720 generic.go:334] "Generic (PLEG): container finished" podID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerID="b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064" exitCode=0
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.984953 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerDied","Data":"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.985019 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkz4c" event={"ID":"c8e6204f-9762-43b9-859a-74aaf49f30f4","Type":"ContainerDied","Data":"643936999e1caa9fadde39e103a14bee57ba457da811ba16db55e9d831f415e0"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.984905 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkz4c"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.993800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" event={"ID":"1311d24a-e35a-489c-8010-0bca3da90f0f","Type":"ContainerStarted","Data":"9950fab6750ed4930eafe72b12fff352d4375dfa171e8bb7211208301fc17cfe"}
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.995091 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.995201 4720 scope.go:117] "RemoveContainer" containerID="3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.997695 4720 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bg62x container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body=
Jan 22 06:41:14 crc kubenswrapper[4720]: I0122 06:41:14.997748 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" podUID="1311d24a-e35a-489c-8010-0bca3da90f0f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.002249 4720 generic.go:334] "Generic (PLEG): container finished" podID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerID="84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79" exitCode=0
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.002333 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.002434 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" event={"ID":"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c","Type":"ContainerDied","Data":"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"}
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.002494 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhzl2" event={"ID":"41f9ff9a-13f9-49b2-8ba6-0f56462cc94c","Type":"ContainerDied","Data":"6591c95f3e3f2e5948260f1e7be83c08b2e294a0ff894541e742808920565c4a"}
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.009333 4720 generic.go:334] "Generic (PLEG): container finished" podID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerID="34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8" exitCode=0
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.009385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerDied","Data":"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"}
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.009419 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z59w9" event={"ID":"42ecbfe2-1714-40ca-b7ac-191fcbd65b0e","Type":"ContainerDied","Data":"2c19198f85b808611a5372518ffe199395400555541fa30d5328d21745746fe7"}
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.009504 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z59w9"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.029684 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" podStartSLOduration=2.029657734 podStartE2EDuration="2.029657734s" podCreationTimestamp="2026-01-22 06:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:41:15.022743806 +0000 UTC m=+367.164650511" watchObservedRunningTime="2026-01-22 06:41:15.029657734 +0000 UTC m=+367.171564439"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.036173 4720 scope.go:117] "RemoveContainer" containerID="d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.043455 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bvbhh"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.048836 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bvbhh"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.057476 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.061805 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkz4c"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.077314 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.084780 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z59w9"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.091428 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.098164 4720 scope.go:117] "RemoveContainer" containerID="be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.098336 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhzl2"]
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.098832 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb\": container with ID starting with be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb not found: ID does not exist" containerID="be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.099060 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb"} err="failed to get container status \"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb\": rpc error: code = NotFound desc = could not find container \"be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb\": container with ID starting with be07eacf9ac2b8f6a2b52544e60aebcd745de2376532ca47276b19811c6c6acb not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.099244 4720 scope.go:117] "RemoveContainer" containerID="3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.099880 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c\": container with ID starting with 3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c not found: ID does not exist" containerID="3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.099992 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c"} err="failed to get container status \"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c\": rpc error: code = NotFound desc = could not find container \"3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c\": container with ID starting with 3187d94e4980da5cada473954186e3f11de0b89e7539c155ff6309ad9ab4ea1c not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.100075 4720 scope.go:117] "RemoveContainer" containerID="d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.100450 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f\": container with ID starting with d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f not found: ID does not exist" containerID="d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.100661 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f"} err="failed to get container status \"d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f\": rpc error: code = NotFound desc = could not find container \"d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f\": container with ID starting with d6e21f9637f2c316934d65f453d95ebc458f8cce28cd08450e8ba3e1bb0b2a4f not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.100735 4720 scope.go:117] "RemoveContainer" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.101654 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.105546 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgfdc"]
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.117201 4720 scope.go:117] "RemoveContainer" containerID="4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.133898 4720 scope.go:117] "RemoveContainer" containerID="2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.150017 4720 scope.go:117] "RemoveContainer" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.151785 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff\": container with ID starting with 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff not found: ID does not exist" containerID="65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.151823 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff"} err="failed to get container status \"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff\": rpc error: code = NotFound desc = could not find container \"65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff\": container with ID starting with 65108877e25c65782214e1cdcf650489c6d812dec07aa151ea3caf11d47171ff not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.151858 4720 scope.go:117] "RemoveContainer" containerID="4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.152267 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd\": container with ID starting with 4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd not found: ID does not exist" containerID="4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.152334 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd"} err="failed to get container status \"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd\": rpc error: code = NotFound desc = could not find container \"4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd\": container with ID starting with 4e911eae137de14ecbd15af4801bd3f8be27e5d69e51d012fd21832ebf3acebd not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.152376 4720 scope.go:117] "RemoveContainer" containerID="2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.152707 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad\": container with ID starting with 2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad not found: ID does not exist" containerID="2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.152736 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad"} err="failed to get container status \"2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad\": rpc error: code = NotFound desc = could not find container \"2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad\": container with ID starting with 2300866ef4420be03e1e5f9a64abc1bfcdbd5cfc3054b71779b86dd3dcde38ad not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.152760 4720 scope.go:117] "RemoveContainer" containerID="b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.168999 4720 scope.go:117] "RemoveContainer" containerID="35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.190800 4720 scope.go:117] "RemoveContainer" containerID="0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.208660 4720 scope.go:117] "RemoveContainer" containerID="b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.209239 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064\": container with ID starting with b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064 not found: ID does not exist" containerID="b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.209305 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064"} err="failed to get container status \"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064\": rpc error: code = NotFound desc = could not find container \"b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064\": container with ID starting with b6d436a4de33fd7e0b314a607636017f40c2d26b8ed4e1ef36bab6c0042c6064 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.209355 4720 scope.go:117] "RemoveContainer" containerID="35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.209782 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006\": container with ID starting with 35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006 not found: ID does not exist" containerID="35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.209811 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006"} err="failed to get container status \"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006\": rpc error: code = NotFound desc = could not find container \"35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006\": container with ID starting with 35abea23ccd25eac1a67d61239ebaeeb96c39265435a4221f5e1789754d50006 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.209829 4720 scope.go:117] "RemoveContainer" containerID="0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.210176 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7\": container with ID starting with 0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7 not found: ID does not exist" containerID="0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.210219 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7"} err="failed to get container status \"0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7\": rpc error: code = NotFound desc = could not find container \"0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7\": container with ID starting with 0626055bf456fed27a01e50de9ec6b06989a30050c6e7c7c04f19f982bc457a7 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.210255 4720 scope.go:117] "RemoveContainer" containerID="84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.231649 4720 scope.go:117] "RemoveContainer" containerID="84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.232619 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79\": container with ID starting with 84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79 not found: ID does not exist" containerID="84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.232690 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79"} err="failed to get container status \"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79\": rpc error: code = NotFound desc = could not find container \"84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79\": container with ID starting with 84563aa1228da1b60aeed2a84b7aab7fc81ef587a6288b7357e30f1403934c79 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.232737 4720 scope.go:117] "RemoveContainer" containerID="34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.255781 4720 scope.go:117] "RemoveContainer" containerID="2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.277696 4720 scope.go:117] "RemoveContainer" containerID="492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.296526 4720 scope.go:117] "RemoveContainer" containerID="34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.298088 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8\": container with ID starting with 34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8 not found: ID does not exist" containerID="34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.298170 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8"} err="failed to get container status \"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8\": rpc error: code = NotFound desc = could not find container \"34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8\": container with ID starting with 34816e439dd030ce2ecf1a7f4102df4439518dcb55c621262166471c4536e4a8 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.298279 4720 scope.go:117] "RemoveContainer" containerID="2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.299082 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605\": container with ID starting with 2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605 not found: ID does not exist" containerID="2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.299141 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605"} err="failed to get container status \"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605\": rpc error: code = NotFound desc = could not find container \"2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605\": container with ID starting with 2e11d5a77a252e9201d9c5db27af519b28d837ec1aed35a6b209d9b5ed416605 not found: ID does not exist"
Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.299184 4720 scope.go:117] "RemoveContainer" containerID="492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca"
Jan 22 06:41:15 crc kubenswrapper[4720]: E0122 06:41:15.300157 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca\": container with ID starting with 492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca not found: ID does not exist"
containerID="492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca" Jan 22 06:41:15 crc kubenswrapper[4720]: I0122 06:41:15.300211 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca"} err="failed to get container status \"492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca\": rpc error: code = NotFound desc = could not find container \"492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca\": container with ID starting with 492b16e7ca9e3e6c6b7336c843d9d1eb38a67f872a952a7b07221ef061414dca not found: ID does not exist" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.017951 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" event={"ID":"1311d24a-e35a-489c-8010-0bca3da90f0f","Type":"ContainerStarted","Data":"5e7c27e7a8351bf8d5db9c7a0e42a7a2debb351b370a8f237fbe94c25105e514"} Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.022588 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bg62x" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125189 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6rvhm"] Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125415 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125428 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125445 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" Jan 22 
06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125450 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125460 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125467 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125474 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125480 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125491 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125497 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125505 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125511 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125520 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="extract-utilities" Jan 22 
06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125525 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125533 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125539 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125546 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125553 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125561 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125567 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125574 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125580 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="extract-content" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125589 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="registry-server" Jan 22 06:41:16 
crc kubenswrapper[4720]: I0122 06:41:16.125595 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: E0122 06:41:16.125602 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125607 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="extract-utilities" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125696 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125707 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" containerName="marketplace-operator" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125719 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125726 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.125733 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" containerName="registry-server" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.126472 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.129566 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.156279 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rvhm"] Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.200534 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnwl\" (UniqueName: \"kubernetes.io/projected/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-kube-api-access-ppnwl\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.200622 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-catalog-content\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.200724 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-utilities\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.222175 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41f9ff9a-13f9-49b2-8ba6-0f56462cc94c" path="/var/lib/kubelet/pods/41f9ff9a-13f9-49b2-8ba6-0f56462cc94c/volumes" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.223622 
4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42ecbfe2-1714-40ca-b7ac-191fcbd65b0e" path="/var/lib/kubelet/pods/42ecbfe2-1714-40ca-b7ac-191fcbd65b0e/volumes" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.225242 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557f2e7c-b408-456f-bfc8-b6714839b46a" path="/var/lib/kubelet/pods/557f2e7c-b408-456f-bfc8-b6714839b46a/volumes" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.227694 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67487e16-e2f8-441f-9fd2-41e1997d91df" path="/var/lib/kubelet/pods/67487e16-e2f8-441f-9fd2-41e1997d91df/volumes" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.229381 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e6204f-9762-43b9-859a-74aaf49f30f4" path="/var/lib/kubelet/pods/c8e6204f-9762-43b9-859a-74aaf49f30f4/volumes" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.302977 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnwl\" (UniqueName: \"kubernetes.io/projected/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-kube-api-access-ppnwl\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.303330 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-catalog-content\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.303378 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-utilities\") pod 
\"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.304039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-catalog-content\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.304232 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-utilities\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.325389 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qwws2"] Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.327440 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.336838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnwl\" (UniqueName: \"kubernetes.io/projected/58ab1210-e65f-4e2b-a3f9-dacecd42d90d-kube-api-access-ppnwl\") pod \"redhat-marketplace-6rvhm\" (UID: \"58ab1210-e65f-4e2b-a3f9-dacecd42d90d\") " pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.339741 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.348237 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwws2"] Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.405960 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpzgg\" (UniqueName: \"kubernetes.io/projected/737d462f-6525-4b14-b25d-bc2687d9c5e8-kube-api-access-gpzgg\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.406029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-catalog-content\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.406098 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-utilities\") pod \"redhat-operators-qwws2\" (UID: 
\"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.452615 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6rvhm" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.507827 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpzgg\" (UniqueName: \"kubernetes.io/projected/737d462f-6525-4b14-b25d-bc2687d9c5e8-kube-api-access-gpzgg\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.507893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-catalog-content\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.507984 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-utilities\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.508542 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-catalog-content\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.508697 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/737d462f-6525-4b14-b25d-bc2687d9c5e8-utilities\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.525736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpzgg\" (UniqueName: \"kubernetes.io/projected/737d462f-6525-4b14-b25d-bc2687d9c5e8-kube-api-access-gpzgg\") pod \"redhat-operators-qwws2\" (UID: \"737d462f-6525-4b14-b25d-bc2687d9c5e8\") " pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.676096 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qwws2" Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.889952 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6rvhm"] Jan 22 06:41:16 crc kubenswrapper[4720]: W0122 06:41:16.893269 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58ab1210_e65f_4e2b_a3f9_dacecd42d90d.slice/crio-a7c1a4febce5ef84afb2bcf9d88fab10d5a914241cbcb5b3e611322ec306c924 WatchSource:0}: Error finding container a7c1a4febce5ef84afb2bcf9d88fab10d5a914241cbcb5b3e611322ec306c924: Status 404 returned error can't find the container with id a7c1a4febce5ef84afb2bcf9d88fab10d5a914241cbcb5b3e611322ec306c924 Jan 22 06:41:16 crc kubenswrapper[4720]: I0122 06:41:16.895711 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qwws2"] Jan 22 06:41:16 crc kubenswrapper[4720]: W0122 06:41:16.898615 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod737d462f_6525_4b14_b25d_bc2687d9c5e8.slice/crio-89bdc1b16c217b3f1672a4e79e05c97a1ce91b2e103ca04690afcc8fe6109e0e 
WatchSource:0}: Error finding container 89bdc1b16c217b3f1672a4e79e05c97a1ce91b2e103ca04690afcc8fe6109e0e: Status 404 returned error can't find the container with id 89bdc1b16c217b3f1672a4e79e05c97a1ce91b2e103ca04690afcc8fe6109e0e Jan 22 06:41:17 crc kubenswrapper[4720]: I0122 06:41:17.037860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerStarted","Data":"6760563331acbe2a2ef9c17f55465cd95ca82b63702dd70b9dcee3ebd12c9914"} Jan 22 06:41:17 crc kubenswrapper[4720]: I0122 06:41:17.038370 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerStarted","Data":"89bdc1b16c217b3f1672a4e79e05c97a1ce91b2e103ca04690afcc8fe6109e0e"} Jan 22 06:41:17 crc kubenswrapper[4720]: I0122 06:41:17.043979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rvhm" event={"ID":"58ab1210-e65f-4e2b-a3f9-dacecd42d90d","Type":"ContainerStarted","Data":"0762c6cc9c7140f8e4f56c73fc02b83cf1ef4df731472b3c139a753b55e032b5"} Jan 22 06:41:17 crc kubenswrapper[4720]: I0122 06:41:17.044020 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rvhm" event={"ID":"58ab1210-e65f-4e2b-a3f9-dacecd42d90d","Type":"ContainerStarted","Data":"a7c1a4febce5ef84afb2bcf9d88fab10d5a914241cbcb5b3e611322ec306c924"} Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.061436 4720 generic.go:334] "Generic (PLEG): container finished" podID="58ab1210-e65f-4e2b-a3f9-dacecd42d90d" containerID="0762c6cc9c7140f8e4f56c73fc02b83cf1ef4df731472b3c139a753b55e032b5" exitCode=0 Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.061488 4720 generic.go:334] "Generic (PLEG): container finished" podID="58ab1210-e65f-4e2b-a3f9-dacecd42d90d" 
containerID="228e16747a82a8e0927101b8fe1fd4c029fce9660c4b2610870f4ce22ec67590" exitCode=0 Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.061560 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rvhm" event={"ID":"58ab1210-e65f-4e2b-a3f9-dacecd42d90d","Type":"ContainerDied","Data":"0762c6cc9c7140f8e4f56c73fc02b83cf1ef4df731472b3c139a753b55e032b5"} Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.061608 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rvhm" event={"ID":"58ab1210-e65f-4e2b-a3f9-dacecd42d90d","Type":"ContainerDied","Data":"228e16747a82a8e0927101b8fe1fd4c029fce9660c4b2610870f4ce22ec67590"} Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.063527 4720 generic.go:334] "Generic (PLEG): container finished" podID="737d462f-6525-4b14-b25d-bc2687d9c5e8" containerID="6760563331acbe2a2ef9c17f55465cd95ca82b63702dd70b9dcee3ebd12c9914" exitCode=0 Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.063596 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerDied","Data":"6760563331acbe2a2ef9c17f55465cd95ca82b63702dd70b9dcee3ebd12c9914"} Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.530049 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.532672 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.536977 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.544518 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.651154 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.651248 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.651293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mqp\" (UniqueName: \"kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.735412 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tbv85"] Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.738593 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.742381 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.742976 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbv85"] Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.753049 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.753113 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.753155 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mqp\" (UniqueName: \"kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.753900 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " 
pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.754178 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.787006 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mqp\" (UniqueName: \"kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp\") pod \"community-operators-trf28\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.854470 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-utilities\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.854670 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktbl\" (UniqueName: \"kubernetes.io/projected/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-kube-api-access-lktbl\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.854829 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-catalog-content\") pod \"certified-operators-tbv85\" (UID: 
\"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.874508 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trf28" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.956037 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lktbl\" (UniqueName: \"kubernetes.io/projected/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-kube-api-access-lktbl\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.956668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-catalog-content\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.956858 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-utilities\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.957445 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-catalog-content\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.957613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-utilities\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:18 crc kubenswrapper[4720]: I0122 06:41:18.986200 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lktbl\" (UniqueName: \"kubernetes.io/projected/2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b-kube-api-access-lktbl\") pod \"certified-operators-tbv85\" (UID: \"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b\") " pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.072948 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerStarted","Data":"c228223ba685b0dadc708456d970f1a762c29f5bce5231b58fd03f5f9a1b7c27"} Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.076371 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tbv85" Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.079219 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6rvhm" event={"ID":"58ab1210-e65f-4e2b-a3f9-dacecd42d90d","Type":"ContainerStarted","Data":"c0c163378bf94dbb9aa8a884c02b53ca38644707911863346f5e6c6efb961c89"} Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.128872 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6rvhm" podStartSLOduration=1.7197849120000002 podStartE2EDuration="3.128848682s" podCreationTimestamp="2026-01-22 06:41:16 +0000 UTC" firstStartedPulling="2026-01-22 06:41:17.046270614 +0000 UTC m=+369.188177319" lastFinishedPulling="2026-01-22 06:41:18.455334374 +0000 UTC m=+370.597241089" observedRunningTime="2026-01-22 06:41:19.126548506 +0000 UTC m=+371.268455231" watchObservedRunningTime="2026-01-22 06:41:19.128848682 +0000 UTC m=+371.270755387" Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.321899 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tbv85"] Jan 22 06:41:19 crc kubenswrapper[4720]: W0122 06:41:19.331116 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f3da976_0b3b_4d11_82fa_a5ea4ebcb38b.slice/crio-5a7e36d6aed8b532c32c55bb314f50868ae47c1b1bb951778ca3d895b12252c5 WatchSource:0}: Error finding container 5a7e36d6aed8b532c32c55bb314f50868ae47c1b1bb951778ca3d895b12252c5: Status 404 returned error can't find the container with id 5a7e36d6aed8b532c32c55bb314f50868ae47c1b1bb951778ca3d895b12252c5 Jan 22 06:41:19 crc kubenswrapper[4720]: I0122 06:41:19.340025 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.086744 4720 
generic.go:334] "Generic (PLEG): container finished" podID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerID="cbf5a0dc51f7b9a0f7f2f7078ea5ab1438c70e2d178ec76e5747e51b968d969d" exitCode=0 Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.087583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerDied","Data":"cbf5a0dc51f7b9a0f7f2f7078ea5ab1438c70e2d178ec76e5747e51b968d969d"} Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.087647 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerStarted","Data":"efbae97e8be5ce4a1c8ac6e5ffe17a3a3f59408125616f360bdd48394a84f4e2"} Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.089627 4720 generic.go:334] "Generic (PLEG): container finished" podID="2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b" containerID="a710223d8aaafb21cb8dcdd315cae4d6d6065f960b360bb957204ab129ee6b0c" exitCode=0 Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.089817 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbv85" event={"ID":"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b","Type":"ContainerDied","Data":"a710223d8aaafb21cb8dcdd315cae4d6d6065f960b360bb957204ab129ee6b0c"} Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.090084 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbv85" event={"ID":"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b","Type":"ContainerStarted","Data":"5a7e36d6aed8b532c32c55bb314f50868ae47c1b1bb951778ca3d895b12252c5"} Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.092879 4720 generic.go:334] "Generic (PLEG): container finished" podID="737d462f-6525-4b14-b25d-bc2687d9c5e8" containerID="c228223ba685b0dadc708456d970f1a762c29f5bce5231b58fd03f5f9a1b7c27" exitCode=0 Jan 22 
06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.093654 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerDied","Data":"c228223ba685b0dadc708456d970f1a762c29f5bce5231b58fd03f5f9a1b7c27"} Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.264155 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.264718 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" podUID="5df93441-3446-474a-9f82-82bba08eb13f" containerName="route-controller-manager" containerID="cri-o://3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d" gracePeriod=30 Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.630694 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.784486 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") pod \"5df93441-3446-474a-9f82-82bba08eb13f\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.784553 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert\") pod \"5df93441-3446-474a-9f82-82bba08eb13f\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.784596 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") pod \"5df93441-3446-474a-9f82-82bba08eb13f\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.784628 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6sxv\" (UniqueName: \"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") pod \"5df93441-3446-474a-9f82-82bba08eb13f\" (UID: \"5df93441-3446-474a-9f82-82bba08eb13f\") " Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.785648 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config" (OuterVolumeSpecName: "config") pod "5df93441-3446-474a-9f82-82bba08eb13f" (UID: "5df93441-3446-474a-9f82-82bba08eb13f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.786247 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca" (OuterVolumeSpecName: "client-ca") pod "5df93441-3446-474a-9f82-82bba08eb13f" (UID: "5df93441-3446-474a-9f82-82bba08eb13f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.791767 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5df93441-3446-474a-9f82-82bba08eb13f" (UID: "5df93441-3446-474a-9f82-82bba08eb13f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.793075 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv" (OuterVolumeSpecName: "kube-api-access-k6sxv") pod "5df93441-3446-474a-9f82-82bba08eb13f" (UID: "5df93441-3446-474a-9f82-82bba08eb13f"). InnerVolumeSpecName "kube-api-access-k6sxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.886514 4720 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.886557 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6sxv\" (UniqueName: \"kubernetes.io/projected/5df93441-3446-474a-9f82-82bba08eb13f-kube-api-access-k6sxv\") on node \"crc\" DevicePath \"\"" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.886568 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5df93441-3446-474a-9f82-82bba08eb13f-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:41:20 crc kubenswrapper[4720]: I0122 06:41:20.886577 4720 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5df93441-3446-474a-9f82-82bba08eb13f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.100446 4720 generic.go:334] "Generic (PLEG): container finished" podID="5df93441-3446-474a-9f82-82bba08eb13f" containerID="3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d" exitCode=0 Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.100483 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.100500 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" event={"ID":"5df93441-3446-474a-9f82-82bba08eb13f","Type":"ContainerDied","Data":"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d"} Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.100538 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj" event={"ID":"5df93441-3446-474a-9f82-82bba08eb13f","Type":"ContainerDied","Data":"6c0efc40d2f56a101069c6adc50dd4e04c8a7029e787cc37a5518f08fbb689c7"} Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.100561 4720 scope.go:117] "RemoveContainer" containerID="3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.128416 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.130210 4720 scope.go:117] "RemoveContainer" containerID="3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d" Jan 22 06:41:21 crc kubenswrapper[4720]: E0122 06:41:21.130754 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d\": container with ID starting with 3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d not found: ID does not exist" containerID="3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.130807 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d"} err="failed to get container status \"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d\": rpc error: code = NotFound desc = could not find container \"3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d\": container with ID starting with 3bdb9ac20ad07c48ead25f659d2b5c66f2bea560ed6340dbe499784c93cf174d not found: ID does not exist" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.132692 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5598468cdf-65dfj"] Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.688146 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6"] Jan 22 06:41:21 crc kubenswrapper[4720]: E0122 06:41:21.688838 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df93441-3446-474a-9f82-82bba08eb13f" containerName="route-controller-manager" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.688856 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df93441-3446-474a-9f82-82bba08eb13f" containerName="route-controller-manager" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.688964 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df93441-3446-474a-9f82-82bba08eb13f" containerName="route-controller-manager" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.689394 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.694880 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.695093 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.695391 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.695600 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.695739 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.702249 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.702803 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6"] Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.797024 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh9hg\" (UniqueName: \"kubernetes.io/projected/17eb1033-2ca8-4049-bb1c-783d5f112689-kube-api-access-bh9hg\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.797068 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-client-ca\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.797123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb1033-2ca8-4049-bb1c-783d5f112689-serving-cert\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.797155 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-config\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.898619 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh9hg\" (UniqueName: \"kubernetes.io/projected/17eb1033-2ca8-4049-bb1c-783d5f112689-kube-api-access-bh9hg\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.898669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-client-ca\") pod 
\"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.898725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb1033-2ca8-4049-bb1c-783d5f112689-serving-cert\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.898759 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-config\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.900013 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-client-ca\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.900148 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17eb1033-2ca8-4049-bb1c-783d5f112689-config\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.912619 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17eb1033-2ca8-4049-bb1c-783d5f112689-serving-cert\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:21 crc kubenswrapper[4720]: I0122 06:41:21.916503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh9hg\" (UniqueName: \"kubernetes.io/projected/17eb1033-2ca8-4049-bb1c-783d5f112689-kube-api-access-bh9hg\") pod \"route-controller-manager-849fd97bf8-4lrb6\" (UID: \"17eb1033-2ca8-4049-bb1c-783d5f112689\") " pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.006747 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.134562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbv85" event={"ID":"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b","Type":"ContainerStarted","Data":"1eb222d31a39060948932e2d546d37ff4f234d918d6d6df3dda2044e1e04dc6c"} Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.138657 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qwws2" event={"ID":"737d462f-6525-4b14-b25d-bc2687d9c5e8","Type":"ContainerStarted","Data":"fa325f973bbf4f3790b5c2c6a113be7db60a8c6a2ffe64f597854ecaf47fc9d2"} Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.172952 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerStarted","Data":"8765e253c41e7369a7adf593d9763a8895da11cf4ed1507e00c51a562b83139b"} Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.206025 
4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qwws2" podStartSLOduration=2.785570946 podStartE2EDuration="6.205998046s" podCreationTimestamp="2026-01-22 06:41:16 +0000 UTC" firstStartedPulling="2026-01-22 06:41:18.065543577 +0000 UTC m=+370.207450312" lastFinishedPulling="2026-01-22 06:41:21.485970707 +0000 UTC m=+373.627877412" observedRunningTime="2026-01-22 06:41:22.182983497 +0000 UTC m=+374.324890212" watchObservedRunningTime="2026-01-22 06:41:22.205998046 +0000 UTC m=+374.347904751" Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.217085 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df93441-3446-474a-9f82-82bba08eb13f" path="/var/lib/kubelet/pods/5df93441-3446-474a-9f82-82bba08eb13f/volumes" Jan 22 06:41:22 crc kubenswrapper[4720]: I0122 06:41:22.447708 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6"] Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.182120 4720 generic.go:334] "Generic (PLEG): container finished" podID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerID="8765e253c41e7369a7adf593d9763a8895da11cf4ed1507e00c51a562b83139b" exitCode=0 Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.182231 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerDied","Data":"8765e253c41e7369a7adf593d9763a8895da11cf4ed1507e00c51a562b83139b"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.182301 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerStarted","Data":"7d5ffa46107ed89646b6094ba3272783810517ad027d6af7f6148a265433dd9d"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.185012 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" event={"ID":"17eb1033-2ca8-4049-bb1c-783d5f112689","Type":"ContainerStarted","Data":"343eda63db88b5605a89d24b67f10c35bf93cb9a3f1d15cc4f31f24a9a0fc245"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.185042 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" event={"ID":"17eb1033-2ca8-4049-bb1c-783d5f112689","Type":"ContainerStarted","Data":"55de1b1432785ef680adb250baafc0ecadd0b342ca9aaa37ca37d2ce020f3cac"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.185270 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.188103 4720 generic.go:334] "Generic (PLEG): container finished" podID="2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b" containerID="1eb222d31a39060948932e2d546d37ff4f234d918d6d6df3dda2044e1e04dc6c" exitCode=0 Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.188194 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbv85" event={"ID":"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b","Type":"ContainerDied","Data":"1eb222d31a39060948932e2d546d37ff4f234d918d6d6df3dda2044e1e04dc6c"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.188243 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tbv85" event={"ID":"2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b","Type":"ContainerStarted","Data":"74d0ed6b75ff5beed0ed3f9ce5a3cef90045ad81e3623807da8fcb96bd231d70"} Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.191450 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" Jan 22 06:41:23 crc 
kubenswrapper[4720]: I0122 06:41:23.206463 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-trf28" podStartSLOduration=2.607570926 podStartE2EDuration="5.206439211s" podCreationTimestamp="2026-01-22 06:41:18 +0000 UTC" firstStartedPulling="2026-01-22 06:41:20.089448546 +0000 UTC m=+372.231355251" lastFinishedPulling="2026-01-22 06:41:22.688316821 +0000 UTC m=+374.830223536" observedRunningTime="2026-01-22 06:41:23.206092361 +0000 UTC m=+375.347999056" watchObservedRunningTime="2026-01-22 06:41:23.206439211 +0000 UTC m=+375.348345926" Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.226557 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tbv85" podStartSLOduration=2.7857708370000003 podStartE2EDuration="5.226527266s" podCreationTimestamp="2026-01-22 06:41:18 +0000 UTC" firstStartedPulling="2026-01-22 06:41:20.091270078 +0000 UTC m=+372.233176783" lastFinishedPulling="2026-01-22 06:41:22.532026507 +0000 UTC m=+374.673933212" observedRunningTime="2026-01-22 06:41:23.22352411 +0000 UTC m=+375.365430825" watchObservedRunningTime="2026-01-22 06:41:23.226527266 +0000 UTC m=+375.368433971" Jan 22 06:41:23 crc kubenswrapper[4720]: I0122 06:41:23.253129 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-849fd97bf8-4lrb6" podStartSLOduration=3.2531079370000002 podStartE2EDuration="3.253107937s" podCreationTimestamp="2026-01-22 06:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:41:23.250517592 +0000 UTC m=+375.392424297" watchObservedRunningTime="2026-01-22 06:41:23.253107937 +0000 UTC m=+375.395014642" Jan 22 06:41:26 crc kubenswrapper[4720]: I0122 06:41:26.453653 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6rvhm"
Jan 22 06:41:26 crc kubenswrapper[4720]: I0122 06:41:26.454072 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6rvhm"
Jan 22 06:41:26 crc kubenswrapper[4720]: I0122 06:41:26.512739 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6rvhm"
Jan 22 06:41:26 crc kubenswrapper[4720]: I0122 06:41:26.676856 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qwws2"
Jan 22 06:41:26 crc kubenswrapper[4720]: I0122 06:41:26.676941 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qwws2"
Jan 22 06:41:27 crc kubenswrapper[4720]: I0122 06:41:27.263470 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6rvhm"
Jan 22 06:41:27 crc kubenswrapper[4720]: I0122 06:41:27.723978 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qwws2" podUID="737d462f-6525-4b14-b25d-bc2687d9c5e8" containerName="registry-server" probeResult="failure" output=<
Jan 22 06:41:27 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Jan 22 06:41:27 crc kubenswrapper[4720]: >
Jan 22 06:41:28 crc kubenswrapper[4720]: I0122 06:41:28.876331 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-trf28"
Jan 22 06:41:28 crc kubenswrapper[4720]: I0122 06:41:28.876405 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-trf28"
Jan 22 06:41:28 crc kubenswrapper[4720]: I0122 06:41:28.918499 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-trf28"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.076653 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tbv85"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.076717 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tbv85"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.080141 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mlgq8"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.149364 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"]
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.173299 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tbv85"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.270985 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-trf28"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.281701 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tbv85"
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.781090 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:41:29 crc kubenswrapper[4720]: I0122 06:41:29.781206 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:41:36 crc kubenswrapper[4720]: I0122 06:41:36.728103 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qwws2"
Jan 22 06:41:36 crc kubenswrapper[4720]: I0122 06:41:36.778402 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qwws2"
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.188127 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" podUID="c27ad45d-a6e8-48af-9417-5422ce60dcec" containerName="registry" containerID="cri-o://010b5a9962c9f0671fd301bdb2f34e77b12f2f1912188ded589e9f3f88489a55" gracePeriod=30
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.382287 4720 generic.go:334] "Generic (PLEG): container finished" podID="c27ad45d-a6e8-48af-9417-5422ce60dcec" containerID="010b5a9962c9f0671fd301bdb2f34e77b12f2f1912188ded589e9f3f88489a55" exitCode=0
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.382408 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" event={"ID":"c27ad45d-a6e8-48af-9417-5422ce60dcec","Type":"ContainerDied","Data":"010b5a9962c9f0671fd301bdb2f34e77b12f2f1912188ded589e9f3f88489a55"}
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.658870 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687272 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687569 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687632 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687652 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687679 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhm5d\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.687723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets\") pod \"c27ad45d-a6e8-48af-9417-5422ce60dcec\" (UID: \"c27ad45d-a6e8-48af-9417-5422ce60dcec\") "
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.688517 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.688596 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.689055 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.689073 4720 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.695012 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.695295 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.695382 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.697277 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d" (OuterVolumeSpecName: "kube-api-access-nhm5d") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "kube-api-access-nhm5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.701593 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.706269 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c27ad45d-a6e8-48af-9417-5422ce60dcec" (UID: "c27ad45d-a6e8-48af-9417-5422ce60dcec"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.790732 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhm5d\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-kube-api-access-nhm5d\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.790794 4720 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c27ad45d-a6e8-48af-9417-5422ce60dcec-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.790808 4720 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.790822 4720 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c27ad45d-a6e8-48af-9417-5422ce60dcec-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:54 crc kubenswrapper[4720]: I0122 06:41:54.790834 4720 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c27ad45d-a6e8-48af-9417-5422ce60dcec-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 22 06:41:55 crc kubenswrapper[4720]: I0122 06:41:55.393039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4" event={"ID":"c27ad45d-a6e8-48af-9417-5422ce60dcec","Type":"ContainerDied","Data":"df5261edc36757235a737f8384eb22ddaaabb5a1005f44880e50bf5c0775be26"}
Jan 22 06:41:55 crc kubenswrapper[4720]: I0122 06:41:55.393141 4720 scope.go:117] "RemoveContainer" containerID="010b5a9962c9f0671fd301bdb2f34e77b12f2f1912188ded589e9f3f88489a55"
Jan 22 06:41:55 crc kubenswrapper[4720]: I0122 06:41:55.393172 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-zcbc4"
Jan 22 06:41:55 crc kubenswrapper[4720]: I0122 06:41:55.437584 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"]
Jan 22 06:41:55 crc kubenswrapper[4720]: I0122 06:41:55.442562 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-zcbc4"]
Jan 22 06:41:56 crc kubenswrapper[4720]: I0122 06:41:56.222021 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27ad45d-a6e8-48af-9417-5422ce60dcec" path="/var/lib/kubelet/pods/c27ad45d-a6e8-48af-9417-5422ce60dcec/volumes"
Jan 22 06:41:59 crc kubenswrapper[4720]: I0122 06:41:59.780379 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:41:59 crc kubenswrapper[4720]: I0122 06:41:59.780823 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:42:29 crc kubenswrapper[4720]: I0122 06:42:29.779824 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:42:29 crc kubenswrapper[4720]: I0122 06:42:29.780458 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:42:29 crc kubenswrapper[4720]: I0122 06:42:29.780519 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:42:29 crc kubenswrapper[4720]: I0122 06:42:29.781279 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:42:29 crc kubenswrapper[4720]: I0122 06:42:29.781361 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e" gracePeriod=600
Jan 22 06:42:30 crc kubenswrapper[4720]: I0122 06:42:30.618090 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e" exitCode=0
Jan 22 06:42:30 crc kubenswrapper[4720]: I0122 06:42:30.618173 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e"}
Jan 22 06:42:30 crc kubenswrapper[4720]: I0122 06:42:30.618875 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1"}
Jan 22 06:42:30 crc kubenswrapper[4720]: I0122 06:42:30.618944 4720 scope.go:117] "RemoveContainer" containerID="88eb6692702bcb8523c759d764bb8dede5af5a2890217a1c6897a5b18a7197dd"
Jan 22 06:44:08 crc kubenswrapper[4720]: I0122 06:44:08.447389 4720 scope.go:117] "RemoveContainer" containerID="6532848adf57f0baefbc3174a61697838923f41ec34413ecb9d18c49a5865764"
Jan 22 06:44:59 crc kubenswrapper[4720]: I0122 06:44:59.780861 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:44:59 crc kubenswrapper[4720]: I0122 06:44:59.781599 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.209536 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"]
Jan 22 06:45:00 crc kubenswrapper[4720]: E0122 06:45:00.209806 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27ad45d-a6e8-48af-9417-5422ce60dcec" containerName="registry"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.209822 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27ad45d-a6e8-48af-9417-5422ce60dcec" containerName="registry"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.209975 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27ad45d-a6e8-48af-9417-5422ce60dcec" containerName="registry"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.210699 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.215481 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.215662 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.226546 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"]
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.342591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.343143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr4v\" (UniqueName: \"kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.343621 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.445270 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.445353 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.445403 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpr4v\" (UniqueName: \"kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.447463 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.460996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.467725 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpr4v\" (UniqueName: \"kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v\") pod \"collect-profiles-29484405-fcsb6\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.546161 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:00 crc kubenswrapper[4720]: I0122 06:45:00.773005 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"]
Jan 22 06:45:01 crc kubenswrapper[4720]: I0122 06:45:01.628326 4720 generic.go:334] "Generic (PLEG): container finished" podID="216f8ab1-3326-4006-b0b5-ac9018b17dbe" containerID="4f659f8d7e2a0d044e94c569c7209aefbda13544226d0f425c6c04462dd0afcb" exitCode=0
Jan 22 06:45:01 crc kubenswrapper[4720]: I0122 06:45:01.628390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6" event={"ID":"216f8ab1-3326-4006-b0b5-ac9018b17dbe","Type":"ContainerDied","Data":"4f659f8d7e2a0d044e94c569c7209aefbda13544226d0f425c6c04462dd0afcb"}
Jan 22 06:45:01 crc kubenswrapper[4720]: I0122 06:45:01.628842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6" event={"ID":"216f8ab1-3326-4006-b0b5-ac9018b17dbe","Type":"ContainerStarted","Data":"93cc59c48636bd288f30c2c3fbff78cc4352dae086ea4ede6a7222f9af1de2be"}
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.829720 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.880426 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpr4v\" (UniqueName: \"kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v\") pod \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") "
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.880710 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume\") pod \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") "
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.880738 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume\") pod \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\" (UID: \"216f8ab1-3326-4006-b0b5-ac9018b17dbe\") "
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.881401 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume" (OuterVolumeSpecName: "config-volume") pod "216f8ab1-3326-4006-b0b5-ac9018b17dbe" (UID: "216f8ab1-3326-4006-b0b5-ac9018b17dbe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.885431 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v" (OuterVolumeSpecName: "kube-api-access-jpr4v") pod "216f8ab1-3326-4006-b0b5-ac9018b17dbe" (UID: "216f8ab1-3326-4006-b0b5-ac9018b17dbe"). InnerVolumeSpecName "kube-api-access-jpr4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.886039 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "216f8ab1-3326-4006-b0b5-ac9018b17dbe" (UID: "216f8ab1-3326-4006-b0b5-ac9018b17dbe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.981704 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpr4v\" (UniqueName: \"kubernetes.io/projected/216f8ab1-3326-4006-b0b5-ac9018b17dbe-kube-api-access-jpr4v\") on node \"crc\" DevicePath \"\""
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.981743 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/216f8ab1-3326-4006-b0b5-ac9018b17dbe-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 22 06:45:02 crc kubenswrapper[4720]: I0122 06:45:02.981755 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/216f8ab1-3326-4006-b0b5-ac9018b17dbe-config-volume\") on node \"crc\" DevicePath \"\""
Jan 22 06:45:03 crc kubenswrapper[4720]: I0122 06:45:03.641500 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6" event={"ID":"216f8ab1-3326-4006-b0b5-ac9018b17dbe","Type":"ContainerDied","Data":"93cc59c48636bd288f30c2c3fbff78cc4352dae086ea4ede6a7222f9af1de2be"}
Jan 22 06:45:03 crc kubenswrapper[4720]: I0122 06:45:03.641565 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cc59c48636bd288f30c2c3fbff78cc4352dae086ea4ede6a7222f9af1de2be"
Jan 22 06:45:03 crc kubenswrapper[4720]: I0122 06:45:03.641652 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"
Jan 22 06:45:08 crc kubenswrapper[4720]: I0122 06:45:08.492478 4720 scope.go:117] "RemoveContainer" containerID="80bb9fbb458d15c1532a3b3ff1f288a38aa5bc229f70a71dac5351a5b1881af6"
Jan 22 06:45:08 crc kubenswrapper[4720]: I0122 06:45:08.523130 4720 scope.go:117] "RemoveContainer" containerID="bfc095255073f80b3b211dc677e38b20d156bd3c97c9f9aa02b70c2a2d69b8e2"
Jan 22 06:45:29 crc kubenswrapper[4720]: I0122 06:45:29.780768 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:45:29 crc kubenswrapper[4720]: I0122 06:45:29.781479 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:45:59 crc kubenswrapper[4720]: I0122 06:45:59.780765 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:45:59 crc kubenswrapper[4720]: I0122 06:45:59.781480 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:45:59 crc kubenswrapper[4720]: I0122 06:45:59.781537 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:45:59 crc kubenswrapper[4720]: I0122 06:45:59.782281 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:45:59 crc kubenswrapper[4720]: I0122 06:45:59.782342 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1" gracePeriod=600
Jan 22 06:46:00 crc kubenswrapper[4720]: I0122 06:46:00.987255 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1" exitCode=0
Jan 22 06:46:00 crc kubenswrapper[4720]: I0122 06:46:00.987314 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1"}
Jan 22 06:46:00 crc kubenswrapper[4720]: I0122 06:46:00.987596 4720 scope.go:117] "RemoveContainer" containerID="f83c910b79e584790834a758285c2f47f6303b6b8de79f48f26d6971c7a8b55e"
Jan 22 06:46:01 crc kubenswrapper[4720]: I0122 06:46:01.994659 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1"}
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.279382 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"]
Jan 22 06:47:30 crc kubenswrapper[4720]: E0122 06:47:30.280789 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216f8ab1-3326-4006-b0b5-ac9018b17dbe" containerName="collect-profiles"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.280812 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="216f8ab1-3326-4006-b0b5-ac9018b17dbe" containerName="collect-profiles"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.281062 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="216f8ab1-3326-4006-b0b5-ac9018b17dbe" containerName="collect-profiles"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.282557 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.286138 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"]
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.294534 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.426017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.426507 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpq78\" (UniqueName: \"kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.426623 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.528271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpq78\" (UniqueName: \"kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.528623 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.528763 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.529208 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.529251 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.555772 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpq78\" (UniqueName: \"kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.604315 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:30 crc kubenswrapper[4720]: I0122 06:47:30.877844 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"]
Jan 22 06:47:31 crc kubenswrapper[4720]: I0122 06:47:31.632493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerStarted","Data":"18e0e98255cb8dbbca56ea1e58ecd1e4b63b806b47962470914e5d83a6f4f2b4"}
Jan 22 06:47:32 crc kubenswrapper[4720]: I0122 06:47:32.641183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerStarted","Data":"f87370a668cd99f786cfed02124406bb03f568ebe18a157eb98aa72322edcd2b"}
Jan 22 06:47:33 crc kubenswrapper[4720]: I0122 06:47:33.656260 4720
generic.go:334] "Generic (PLEG): container finished" podID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerID="f87370a668cd99f786cfed02124406bb03f568ebe18a157eb98aa72322edcd2b" exitCode=0 Jan 22 06:47:33 crc kubenswrapper[4720]: I0122 06:47:33.656351 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerDied","Data":"f87370a668cd99f786cfed02124406bb03f568ebe18a157eb98aa72322edcd2b"} Jan 22 06:47:33 crc kubenswrapper[4720]: I0122 06:47:33.662473 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 06:47:38 crc kubenswrapper[4720]: I0122 06:47:38.691643 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerStarted","Data":"bd54809c5a7f0844007b707f6a4cbeefa178a26d1b9d6c18bd4e52b18ba45835"} Jan 22 06:47:39 crc kubenswrapper[4720]: I0122 06:47:39.606557 4720 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 22 06:47:39 crc kubenswrapper[4720]: I0122 06:47:39.700571 4720 generic.go:334] "Generic (PLEG): container finished" podID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerID="bd54809c5a7f0844007b707f6a4cbeefa178a26d1b9d6c18bd4e52b18ba45835" exitCode=0 Jan 22 06:47:39 crc kubenswrapper[4720]: I0122 06:47:39.700657 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerDied","Data":"bd54809c5a7f0844007b707f6a4cbeefa178a26d1b9d6c18bd4e52b18ba45835"} Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.596590 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.597670 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.607421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.669562 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7nln\" (UniqueName: \"kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.669629 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.669646 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.708480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" 
event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerStarted","Data":"2fcf5e45c2b3ccfb74132ed80b299757908773e1c36d1c9aee7d9f90bad5121a"} Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.727310 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" podStartSLOduration=6.015646344 podStartE2EDuration="10.727290737s" podCreationTimestamp="2026-01-22 06:47:30 +0000 UTC" firstStartedPulling="2026-01-22 06:47:33.662065 +0000 UTC m=+745.803971705" lastFinishedPulling="2026-01-22 06:47:38.373709393 +0000 UTC m=+750.515616098" observedRunningTime="2026-01-22 06:47:40.725391354 +0000 UTC m=+752.867298089" watchObservedRunningTime="2026-01-22 06:47:40.727290737 +0000 UTC m=+752.869197442" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.771608 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7nln\" (UniqueName: \"kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.771929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.772011 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 
06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.772479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.772552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.796128 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7nln\" (UniqueName: \"kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln\") pod \"redhat-operators-znwhq\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.969842 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.985073 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc2f4"] Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.985934 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-controller" containerID="cri-o://279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986451 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="sbdb" containerID="cri-o://dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986516 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="nbdb" containerID="cri-o://b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986565 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="northd" containerID="cri-o://5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986612 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986658 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-node" containerID="cri-o://be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" gracePeriod=30 Jan 22 06:47:40 crc kubenswrapper[4720]: I0122 06:47:40.986713 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-acl-logging" containerID="cri-o://bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" gracePeriod=30 Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.022854 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" containerID="cri-o://dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" gracePeriod=30 Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.716276 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log" Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.719289 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovn-acl-logging/0.log" Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.720077 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" exitCode=143 Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.720158 
4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.722419 4720 generic.go:334] "Generic (PLEG): container finished" podID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerID="2fcf5e45c2b3ccfb74132ed80b299757908773e1c36d1c9aee7d9f90bad5121a" exitCode=0 Jan 22 06:47:41 crc kubenswrapper[4720]: I0122 06:47:41.722465 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerDied","Data":"2fcf5e45c2b3ccfb74132ed80b299757908773e1c36d1c9aee7d9f90bad5121a"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.456897 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.460558 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovn-acl-logging/0.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.461375 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovn-controller/0.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.461854 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519111 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vm9h9"] Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519361 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-acl-logging" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519376 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-acl-logging" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519385 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="sbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519391 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="sbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519404 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="nbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519411 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="nbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519422 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519430 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519441 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" 
containerName="kube-rbac-proxy-node" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519447 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-node" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519454 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519462 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519469 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519477 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519485 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="northd" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519491 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="northd" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519501 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519507 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519516 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" 
containerName="kubecfg-setup" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519523 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kubecfg-setup" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519530 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519536 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519544 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519549 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: E0122 06:47:42.519559 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519565 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519656 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519666 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519673 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519682 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519688 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-ovn-metrics" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519699 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="nbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519708 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="kube-rbac-proxy-node" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519716 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519723 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="sbdb" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519730 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="northd" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519738 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovn-acl-logging" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.519955 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerName="ovnkube-controller" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.521632 4720 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604478 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604579 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604717 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604733 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604780 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket" (OuterVolumeSpecName: "log-socket") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604834 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604864 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604901 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605025 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605040 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605059 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.604928 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605125 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") " Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605100 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605198 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605216 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605231 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605248 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605268 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605292 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605311 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605292 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605348 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmnn9\" (UniqueName: \"kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605360 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log" (OuterVolumeSpecName: "node-log") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605371 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605440 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config\") pod \"9a725fa6-120e-41b1-bf7b-e1419e35c891\" (UID: \"9a725fa6-120e-41b1-bf7b-e1419e35c891\") "
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605526 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605593 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605627 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-etc-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-netns\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605670 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-kubelet\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605700 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-slash\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605730 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-ovn\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-config\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605774 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-node-log\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-script-lib\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605805 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-bin\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605824 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605842 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-env-overrides\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605864 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm7p7\" (UniqueName: \"kubernetes.io/projected/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-kube-api-access-rm7p7\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605880 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-systemd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605899 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-systemd-units\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-var-lib-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605948 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-log-socket\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605962 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovn-node-metrics-cert\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.605976 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-netd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606013 4720 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-node-log\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606024 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606033 4720 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606042 4720 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-log-socket\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606050 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606059 4720 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606067 4720 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606075 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606392 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash" (OuterVolumeSpecName: "host-slash") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606439 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606447 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606478 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606511 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606615 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.606862 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.612479 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.613029 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9" (OuterVolumeSpecName: "kube-api-access-fmnn9") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "kube-api-access-fmnn9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.627924 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9a725fa6-120e-41b1-bf7b-e1419e35c891" (UID: "9a725fa6-120e-41b1-bf7b-e1419e35c891"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707578 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-systemd-units\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-var-lib-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707684 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-log-socket\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707706 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovn-node-metrics-cert\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707708 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-systemd-units\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-netd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-netd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707827 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-var-lib-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707840 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707999 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-etc-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-etc-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708056 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-netns\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708127 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-kubelet\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708157 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708183 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-slash\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707807 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-log-socket\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-ovn\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708249 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-openvswitch\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708079 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-netns\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708280 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-slash\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-config\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708289 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-kubelet\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708357 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-node-log\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708318 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-ovn\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708402 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-script-lib\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-node-log\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-bin\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708469 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-cni-bin\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708498 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-env-overrides\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.707861 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-run-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708538 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708532 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm7p7\" (UniqueName: \"kubernetes.io/projected/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-kube-api-access-rm7p7\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708594 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-systemd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708660 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708677 4720 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708694 4720 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-slash\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708707 4720 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708718 4720 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708731 4720 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708740 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-run-systemd\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708746 4720 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708787 4720 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708799 4720 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708809 4720 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9a725fa6-120e-41b1-bf7b-e1419e35c891-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708821 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmnn9\" (UniqueName: \"kubernetes.io/projected/9a725fa6-120e-41b1-bf7b-e1419e35c891-kube-api-access-fmnn9\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.708831 4720 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9a725fa6-120e-41b1-bf7b-e1419e35c891-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.709255 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-config\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.709316 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-env-overrides\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.709386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovnkube-script-lib\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:42 crc kubenswrapper[4720]: I0122
06:47:42.711834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-ovn-node-metrics-cert\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.728795 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovnkube-controller/3.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.730925 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm7p7\" (UniqueName: \"kubernetes.io/projected/3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2-kube-api-access-rm7p7\") pod \"ovnkube-node-vm9h9\" (UID: \"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2\") " pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.731341 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovn-acl-logging/0.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.731735 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pc2f4_9a725fa6-120e-41b1-bf7b-e1419e35c891/ovn-controller/0.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732545 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732570 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732577 
4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732584 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732590 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732597 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" exitCode=0 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732603 4720 generic.go:334] "Generic (PLEG): container finished" podID="9a725fa6-120e-41b1-bf7b-e1419e35c891" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" exitCode=143 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732646 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732676 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732686 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732703 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732726 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732737 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732743 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732748 4720 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732753 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732759 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732764 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732769 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732774 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732781 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732788 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} Jan 22 06:47:42 crc 
kubenswrapper[4720]: I0122 06:47:42.732794 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732799 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732804 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732809 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732816 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732821 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732826 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732831 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} Jan 22 06:47:42 crc 
kubenswrapper[4720]: I0122 06:47:42.732836 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732842 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" event={"ID":"9a725fa6-120e-41b1-bf7b-e1419e35c891","Type":"ContainerDied","Data":"97f65448ee42888f06a1ee0565e9d5e6a0ccb5044062ad539d5060910ee6b4bd"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732850 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732856 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732860 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732865 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732870 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732874 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732879 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732884 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732890 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732895 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.732977 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.733140 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pc2f4" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.736613 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/2.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.737179 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/1.log" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.737302 4720 generic.go:334] "Generic (PLEG): container finished" podID="85373343-156d-4de0-a72b-baaf7c4e3d08" containerID="c0028ee94bbbee298a2b436cb261af92d992335251cf0d39192eacaf29503865" exitCode=2 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.737362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerDied","Data":"c0028ee94bbbee298a2b436cb261af92d992335251cf0d39192eacaf29503865"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.737405 4720 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966"} Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.738034 4720 scope.go:117] "RemoveContainer" containerID="c0028ee94bbbee298a2b436cb261af92d992335251cf0d39192eacaf29503865" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.754333 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.837630 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.839344 4720 scope.go:117] "RemoveContainer" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.839421 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc2f4"] Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.842176 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pc2f4"] Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.852403 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.874738 4720 scope.go:117] "RemoveContainer" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:42 crc kubenswrapper[4720]: W0122 06:47:42.882356 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd5a2a0_4299_41d0_8dc3_39ea67b02ea2.slice/crio-3cebb1f4ba3b1f4b02a014f38cf45b759043214173b5d55adc450817ea69f462 WatchSource:0}: Error finding container 3cebb1f4ba3b1f4b02a014f38cf45b759043214173b5d55adc450817ea69f462: Status 404 returned error can't find the container with id 3cebb1f4ba3b1f4b02a014f38cf45b759043214173b5d55adc450817ea69f462 Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.916196 4720 scope.go:117] "RemoveContainer" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.981078 4720 scope.go:117] "RemoveContainer" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:42 crc kubenswrapper[4720]: I0122 06:47:42.997742 4720 scope.go:117] "RemoveContainer" 
containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.012370 4720 scope.go:117] "RemoveContainer" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.014377 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle\") pod \"dc107a3a-440f-43c6-a92c-378d6fb30761\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.014503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpq78\" (UniqueName: \"kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78\") pod \"dc107a3a-440f-43c6-a92c-378d6fb30761\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.014559 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util\") pod \"dc107a3a-440f-43c6-a92c-378d6fb30761\" (UID: \"dc107a3a-440f-43c6-a92c-378d6fb30761\") " Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.017605 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle" (OuterVolumeSpecName: "bundle") pod "dc107a3a-440f-43c6-a92c-378d6fb30761" (UID: "dc107a3a-440f-43c6-a92c-378d6fb30761"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.020059 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78" (OuterVolumeSpecName: "kube-api-access-hpq78") pod "dc107a3a-440f-43c6-a92c-378d6fb30761" (UID: "dc107a3a-440f-43c6-a92c-378d6fb30761"). InnerVolumeSpecName "kube-api-access-hpq78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.024658 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util" (OuterVolumeSpecName: "util") pod "dc107a3a-440f-43c6-a92c-378d6fb30761" (UID: "dc107a3a-440f-43c6-a92c-378d6fb30761"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.046888 4720 scope.go:117] "RemoveContainer" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.063879 4720 scope.go:117] "RemoveContainer" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.089596 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.090700 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.090748 
4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} err="failed to get container status \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.090784 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.093699 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": container with ID starting with 4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64 not found: ID does not exist" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.093759 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} err="failed to get container status \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": rpc error: code = NotFound desc = could not find container \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": container with ID starting with 4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.093807 4720 scope.go:117] "RemoveContainer" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 
06:47:43.094271 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": container with ID starting with dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d not found: ID does not exist" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.094295 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} err="failed to get container status \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": rpc error: code = NotFound desc = could not find container \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": container with ID starting with dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.094314 4720 scope.go:117] "RemoveContainer" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.094559 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": container with ID starting with b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153 not found: ID does not exist" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.094583 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} err="failed to get container status \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": rpc 
error: code = NotFound desc = could not find container \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": container with ID starting with b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.094597 4720 scope.go:117] "RemoveContainer" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.095213 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": container with ID starting with 5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9 not found: ID does not exist" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.095256 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} err="failed to get container status \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": rpc error: code = NotFound desc = could not find container \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": container with ID starting with 5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.095286 4720 scope.go:117] "RemoveContainer" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.095811 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": container with ID starting with 
b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06 not found: ID does not exist" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.095846 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} err="failed to get container status \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": rpc error: code = NotFound desc = could not find container \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": container with ID starting with b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.095864 4720 scope.go:117] "RemoveContainer" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.096374 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": container with ID starting with be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04 not found: ID does not exist" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.096407 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} err="failed to get container status \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": rpc error: code = NotFound desc = could not find container \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": container with ID starting with be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04 not found: ID does not 
exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.096429 4720 scope.go:117] "RemoveContainer" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.096756 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": container with ID starting with bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd not found: ID does not exist" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.096792 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} err="failed to get container status \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": rpc error: code = NotFound desc = could not find container \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": container with ID starting with bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.096813 4720 scope.go:117] "RemoveContainer" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.097403 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": container with ID starting with 279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440 not found: ID does not exist" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.097438 4720 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} err="failed to get container status \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": rpc error: code = NotFound desc = could not find container \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": container with ID starting with 279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.097456 4720 scope.go:117] "RemoveContainer" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.097832 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": container with ID starting with a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6 not found: ID does not exist" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.097860 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} err="failed to get container status \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": rpc error: code = NotFound desc = could not find container \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": container with ID starting with a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.097877 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.098322 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} err="failed to get container status \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.098350 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.098766 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} err="failed to get container status \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": rpc error: code = NotFound desc = could not find container \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": container with ID starting with 4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.098794 4720 scope.go:117] "RemoveContainer" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.099150 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} err="failed to get container status \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": rpc error: code = NotFound desc = could not find container \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": container with ID starting with 
dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.099188 4720 scope.go:117] "RemoveContainer" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.099840 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} err="failed to get container status \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": rpc error: code = NotFound desc = could not find container \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": container with ID starting with b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.099894 4720 scope.go:117] "RemoveContainer" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.100289 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} err="failed to get container status \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": rpc error: code = NotFound desc = could not find container \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": container with ID starting with 5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.100322 4720 scope.go:117] "RemoveContainer" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.100740 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} err="failed to get container status \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": rpc error: code = NotFound desc = could not find container \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": container with ID starting with b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.100765 4720 scope.go:117] "RemoveContainer" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101075 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} err="failed to get container status \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": rpc error: code = NotFound desc = could not find container \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": container with ID starting with be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101113 4720 scope.go:117] "RemoveContainer" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101469 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} err="failed to get container status \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": rpc error: code = NotFound desc = could not find container \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": container with ID starting with bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd not found: ID does not 
exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101500 4720 scope.go:117] "RemoveContainer" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101897 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} err="failed to get container status \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": rpc error: code = NotFound desc = could not find container \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": container with ID starting with 279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.101953 4720 scope.go:117] "RemoveContainer" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.102485 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} err="failed to get container status \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": rpc error: code = NotFound desc = could not find container \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": container with ID starting with a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.102517 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.102875 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} err="failed to get container status 
\"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.102922 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.103242 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} err="failed to get container status \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": rpc error: code = NotFound desc = could not find container \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": container with ID starting with 4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.103270 4720 scope.go:117] "RemoveContainer" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.103691 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} err="failed to get container status \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": rpc error: code = NotFound desc = could not find container \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": container with ID starting with dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.103718 4720 scope.go:117] "RemoveContainer" 
containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104231 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} err="failed to get container status \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": rpc error: code = NotFound desc = could not find container \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": container with ID starting with b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104261 4720 scope.go:117] "RemoveContainer" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104584 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} err="failed to get container status \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": rpc error: code = NotFound desc = could not find container \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": container with ID starting with 5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104613 4720 scope.go:117] "RemoveContainer" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104953 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} err="failed to get container status \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": rpc error: code = NotFound desc = could 
not find container \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": container with ID starting with b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.104985 4720 scope.go:117] "RemoveContainer" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.105271 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} err="failed to get container status \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": rpc error: code = NotFound desc = could not find container \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": container with ID starting with be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.105298 4720 scope.go:117] "RemoveContainer" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.105586 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} err="failed to get container status \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": rpc error: code = NotFound desc = could not find container \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": container with ID starting with bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.105610 4720 scope.go:117] "RemoveContainer" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 
06:47:43.105976 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} err="failed to get container status \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": rpc error: code = NotFound desc = could not find container \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": container with ID starting with 279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.106018 4720 scope.go:117] "RemoveContainer" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.106467 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} err="failed to get container status \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": rpc error: code = NotFound desc = could not find container \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": container with ID starting with a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.106496 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.106847 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} err="failed to get container status \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with 
dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.106917 4720 scope.go:117] "RemoveContainer" containerID="4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.107287 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64"} err="failed to get container status \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": rpc error: code = NotFound desc = could not find container \"4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64\": container with ID starting with 4bab2488a753e9278defd984cada047c9d9b5411a54e88204c7c67add2341e64 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.107333 4720 scope.go:117] "RemoveContainer" containerID="dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.107683 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d"} err="failed to get container status \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": rpc error: code = NotFound desc = could not find container \"dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d\": container with ID starting with dd6371da033cf0ecd1c0b746cae82e14593edbf2353910a77284a2987741580d not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.107721 4720 scope.go:117] "RemoveContainer" containerID="b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.108164 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153"} err="failed to get container status \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": rpc error: code = NotFound desc = could not find container \"b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153\": container with ID starting with b013c0f8d83bb91a1385ed067377ea5bd81640f7b420160042c33a3112cbe153 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.108199 4720 scope.go:117] "RemoveContainer" containerID="5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.108555 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9"} err="failed to get container status \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": rpc error: code = NotFound desc = could not find container \"5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9\": container with ID starting with 5a1f835b7dfa74f56bbfdc5ef0a6674e6c04ae1be7119e3675d4a7e970bc37d9 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.108593 4720 scope.go:117] "RemoveContainer" containerID="b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.108950 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06"} err="failed to get container status \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": rpc error: code = NotFound desc = could not find container \"b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06\": container with ID starting with b14f48ebae84e1450337b5ce78c6b9e74f4ffd418eaaf9df9fcb7495f5feae06 not found: ID does not 
exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.109007 4720 scope.go:117] "RemoveContainer" containerID="be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.109357 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04"} err="failed to get container status \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": rpc error: code = NotFound desc = could not find container \"be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04\": container with ID starting with be8927bdfee25dd12f02d8f5e4923d1ef6e034a4ab3c036ee8fb76b699aa4a04 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.109398 4720 scope.go:117] "RemoveContainer" containerID="bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.109834 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd"} err="failed to get container status \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": rpc error: code = NotFound desc = could not find container \"bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd\": container with ID starting with bdea94d764eb25af4aec5da1f1aaf94b7d75e66292561a54803a079a6ca35cbd not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.109866 4720 scope.go:117] "RemoveContainer" containerID="279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.110139 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440"} err="failed to get container status 
\"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": rpc error: code = NotFound desc = could not find container \"279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440\": container with ID starting with 279bdc70caff8b05f2eaff1d28bee03725a7069dcc99c4770a8df9a3e3ed0440 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.110179 4720 scope.go:117] "RemoveContainer" containerID="a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.110472 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6"} err="failed to get container status \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": rpc error: code = NotFound desc = could not find container \"a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6\": container with ID starting with a478b385adad231c65bafa44aef49fd983ef1a81babc0464478e199578b612b6 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.110505 4720 scope.go:117] "RemoveContainer" containerID="dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.110803 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7"} err="failed to get container status \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": rpc error: code = NotFound desc = could not find container \"dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7\": container with ID starting with dee4e936e1b6317e41638ead6be9d777b984b89ca57b4bb97aeea2cd862976e7 not found: ID does not exist" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.115730 4720 reconciler_common.go:293] "Volume detached for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.115779 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpq78\" (UniqueName: \"kubernetes.io/projected/dc107a3a-440f-43c6-a92c-378d6fb30761-kube-api-access-hpq78\") on node \"crc\" DevicePath \"\"" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.115791 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc107a3a-440f-43c6-a92c-378d6fb30761-util\") on node \"crc\" DevicePath \"\"" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.193404 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.193643 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="extract" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.193658 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="extract" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.193682 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="pull" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.193688 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="pull" Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.193699 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="util" Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.193705 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="util" Jan 22 06:47:43 crc 
kubenswrapper[4720]: I0122 06:47:43.193807 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc107a3a-440f-43c6-a92c-378d6fb30761" containerName="extract"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.197332 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.319081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.319324 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.319683 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvrkj\" (UniqueName: \"kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.421176 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.421239 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvrkj\" (UniqueName: \"kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.421275 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.421718 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.421821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.442952 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvrkj\" (UniqueName: \"kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj\") pod \"certified-operators-s7ps7\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.512337 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.538190 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(c7e82f4a954efd22922fafb7eafa78e6b8d7adb8c13352572d7ec89de6eea810): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.538289 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(c7e82f4a954efd22922fafb7eafa78e6b8d7adb8c13352572d7ec89de6eea810): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.538317 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(c7e82f4a954efd22922fafb7eafa78e6b8d7adb8c13352572d7ec89de6eea810): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-s7ps7"
Jan 22 06:47:43 crc kubenswrapper[4720]: E0122 06:47:43.538380 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-s7ps7_openshift-marketplace(6aceac93-bd1a-4897-a920-8ee803c81cb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-s7ps7_openshift-marketplace(6aceac93-bd1a-4897-a920-8ee803c81cb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(c7e82f4a954efd22922fafb7eafa78e6b8d7adb8c13352572d7ec89de6eea810): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-s7ps7" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.745745 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/2.log"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.747153 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/1.log"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.747264 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-n5w5r" event={"ID":"85373343-156d-4de0-a72b-baaf7c4e3d08","Type":"ContainerStarted","Data":"7bb990b150be849f9a0cbf134d4381d0c4d8c8c87bbcdbcd3e0933b836bfa152"}
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.751004 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.751000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt" event={"ID":"dc107a3a-440f-43c6-a92c-378d6fb30761","Type":"ContainerDied","Data":"18e0e98255cb8dbbca56ea1e58ecd1e4b63b806b47962470914e5d83a6f4f2b4"}
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.751136 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e0e98255cb8dbbca56ea1e58ecd1e4b63b806b47962470914e5d83a6f4f2b4"
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.752864 4720 generic.go:334] "Generic (PLEG): container finished" podID="3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2" containerID="03a7886da18dd9e58253917492ab9da9fd81a0ba8cb3d29f94b095806d7194a2" exitCode=0
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.752941 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerDied","Data":"03a7886da18dd9e58253917492ab9da9fd81a0ba8cb3d29f94b095806d7194a2"}
Jan 22 06:47:43 crc kubenswrapper[4720]: I0122 06:47:43.752988 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"3cebb1f4ba3b1f4b02a014f38cf45b759043214173b5d55adc450817ea69f462"}
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.218890 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a725fa6-120e-41b1-bf7b-e1419e35c891" path="/var/lib/kubelet/pods/9a725fa6-120e-41b1-bf7b-e1419e35c891/volumes"
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.766119 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"b52ce1132b945b16c677bb7f40bae8bd6dbbb1407915e1dabb6127d4a675a22e"}
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.766175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"6cb58e23d52c053a798c24833ecb7a73fbba98bb6a3be7fb1b9d62a49bb0a03f"}
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.766189 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"9297153428c9a521e540baf39a0b517ca5aab2e09f9e04f663da517e14d9f0d8"}
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.766197 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"416a880b2515a5f71e11aa1cdf01ad7118ed25223602eb67c2e4a299c05bd94e"}
Jan 22 06:47:44 crc kubenswrapper[4720]: I0122 06:47:44.766205 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"5978f0dfe083c28c9dd548d70fce42925c4c8c831ab3935b754ee8de94162918"}
Jan 22 06:47:45 crc kubenswrapper[4720]: I0122 06:47:45.777178 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"924b09347742412aea8073c18f484bebc4715d101278fc0dc11743042d9bdc57"}
Jan 22 06:47:47 crc kubenswrapper[4720]: I0122 06:47:47.792901 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"b27e198e5bc860d681c0fcd388339f2f5491cd2ff692752a35fa29fae711c8b7"}
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.487131 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"]
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.488254 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.490456 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.490832 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.490903 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4f57g"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.621947 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"]
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.623306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.625452 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-8c87j"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.630279 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"]
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.631579 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.632774 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.649504 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzl5\" (UniqueName: \"kubernetes.io/projected/fd9304c1-f30e-4235-9324-b437e69544ee-kube-api-access-zjzl5\") pod \"obo-prometheus-operator-68bc856cb9-5x7g8\" (UID: \"fd9304c1-f30e-4235-9324-b437e69544ee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.751539 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzl5\" (UniqueName: \"kubernetes.io/projected/fd9304c1-f30e-4235-9324-b437e69544ee-kube-api-access-zjzl5\") pod \"obo-prometheus-operator-68bc856cb9-5x7g8\" (UID: \"fd9304c1-f30e-4235-9324-b437e69544ee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.751632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.751682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.751726 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.751813 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.772506 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzl5\" (UniqueName: \"kubernetes.io/projected/fd9304c1-f30e-4235-9324-b437e69544ee-kube-api-access-zjzl5\") pod \"obo-prometheus-operator-68bc856cb9-5x7g8\" (UID: \"fd9304c1-f30e-4235-9324-b437e69544ee\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.805315 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.853627 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.853731 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.853770 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.853857 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.858367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.858446 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.860515 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b47c94b1-cb06-4aa2-aa94-cbf6da840eb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd\" (UID: \"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.870543 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dad79855-f5f9-42e6-ba0b-c2134f92c107-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb\" (UID: \"dad79855-f5f9-42e6-ba0b-c2134f92c107\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.938815 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:52 crc kubenswrapper[4720]: I0122 06:47:52.953534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.356163 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(8b5044b356175f43367fe56fe90e895e8f915e7beca099885bad95e0bf916ddf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.356265 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(8b5044b356175f43367fe56fe90e895e8f915e7beca099885bad95e0bf916ddf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.356295 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(8b5044b356175f43367fe56fe90e895e8f915e7beca099885bad95e0bf916ddf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.356360 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators(fd9304c1-f30e-4235-9324-b437e69544ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators(fd9304c1-f30e-4235-9324-b437e69544ee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(8b5044b356175f43367fe56fe90e895e8f915e7beca099885bad95e0bf916ddf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" podUID="fd9304c1-f30e-4235-9324-b437e69544ee"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.372063 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(fbdae55717eb1fbce2d1a031ddbd4730b10213a7cb924162ae517ef45e5cedf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.372158 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(fbdae55717eb1fbce2d1a031ddbd4730b10213a7cb924162ae517ef45e5cedf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.372185 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(fbdae55717eb1fbce2d1a031ddbd4730b10213a7cb924162ae517ef45e5cedf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.372256 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators(dad79855-f5f9-42e6-ba0b-c2134f92c107)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators(dad79855-f5f9-42e6-ba0b-c2134f92c107)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(fbdae55717eb1fbce2d1a031ddbd4730b10213a7cb924162ae517ef45e5cedf4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" podUID="dad79855-f5f9-42e6-ba0b-c2134f92c107"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.394294 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(0058df5a1712dfba713c98bcf5038521e9574a78fe7d762a2a3d5f2a9c48369d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.394382 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(0058df5a1712dfba713c98bcf5038521e9574a78fe7d762a2a3d5f2a9c48369d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.394408 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(0058df5a1712dfba713c98bcf5038521e9574a78fe7d762a2a3d5f2a9c48369d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.394467 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators(b47c94b1-cb06-4aa2-aa94-cbf6da840eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators(b47c94b1-cb06-4aa2-aa94-cbf6da840eb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(0058df5a1712dfba713c98bcf5038521e9574a78fe7d762a2a3d5f2a9c48369d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" podUID="b47c94b1-cb06-4aa2-aa94-cbf6da840eb4"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.409009 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9tl9d"]
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.410145 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.413438 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-j9qts"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.413623 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.578521 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglms\" (UniqueName: \"kubernetes.io/projected/758ea564-cd8b-4e93-bd76-563d86418578-kube-api-access-sglms\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.578604 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/758ea564-cd8b-4e93-bd76-563d86418578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.600855 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-88ll2"]
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.601837 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.603824 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-vt2gb"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.680119 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/db323c34-5995-4cc9-baab-de570b5fc5b3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.680191 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/758ea564-cd8b-4e93-bd76-563d86418578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.680423 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglms\" (UniqueName: \"kubernetes.io/projected/758ea564-cd8b-4e93-bd76-563d86418578-kube-api-access-sglms\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.680586 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djk26\" (UniqueName: \"kubernetes.io/projected/db323c34-5995-4cc9-baab-de570b5fc5b3-kube-api-access-djk26\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.687983 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/758ea564-cd8b-4e93-bd76-563d86418578-observability-operator-tls\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.722883 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglms\" (UniqueName: \"kubernetes.io/projected/758ea564-cd8b-4e93-bd76-563d86418578-kube-api-access-sglms\") pod \"observability-operator-59bdc8b94-9tl9d\" (UID: \"758ea564-cd8b-4e93-bd76-563d86418578\") " pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.731416 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.766737 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(83292435e3c567d61abb08065eedc6568beecfc26bf34c0b52d26f0e52f6b71c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.766863 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(83292435e3c567d61abb08065eedc6568beecfc26bf34c0b52d26f0e52f6b71c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.766900 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(83292435e3c567d61abb08065eedc6568beecfc26bf34c0b52d26f0e52f6b71c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.766987 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9tl9d_openshift-operators(758ea564-cd8b-4e93-bd76-563d86418578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9tl9d_openshift-operators(758ea564-cd8b-4e93-bd76-563d86418578)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(83292435e3c567d61abb08065eedc6568beecfc26bf34c0b52d26f0e52f6b71c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" podUID="758ea564-cd8b-4e93-bd76-563d86418578"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.781895 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djk26\" (UniqueName: \"kubernetes.io/projected/db323c34-5995-4cc9-baab-de570b5fc5b3-kube-api-access-djk26\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.782085 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/db323c34-5995-4cc9-baab-de570b5fc5b3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.783374 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/db323c34-5995-4cc9-baab-de570b5fc5b3-openshift-service-ca\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.825578 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djk26\" (UniqueName: \"kubernetes.io/projected/db323c34-5995-4cc9-baab-de570b5fc5b3-kube-api-access-djk26\") pod \"perses-operator-5bf474d74f-88ll2\" (UID: \"db323c34-5995-4cc9-baab-de570b5fc5b3\") " pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.881234 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" event={"ID":"3cd5a2a0-4299-41d0-8dc3-39ea67b02ea2","Type":"ContainerStarted","Data":"ca01599aac89aafcaa8b04ec92ca3711b030f4dcb515c15117702ac58f7c86bc"}
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.881736 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.919835 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2"
Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.933447 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.945175 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(69d6e843739a696bbae61d0c3d4efc34fec101ac0a228417b1e3d9a8584d602b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.945265 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(69d6e843739a696bbae61d0c3d4efc34fec101ac0a228417b1e3d9a8584d602b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.945295 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(69d6e843739a696bbae61d0c3d4efc34fec101ac0a228417b1e3d9a8584d602b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:54 crc kubenswrapper[4720]: E0122 06:47:54.945353 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-88ll2_openshift-operators(db323c34-5995-4cc9-baab-de570b5fc5b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-88ll2_openshift-operators(db323c34-5995-4cc9-baab-de570b5fc5b3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(69d6e843739a696bbae61d0c3d4efc34fec101ac0a228417b1e3d9a8584d602b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" podUID="db323c34-5995-4cc9-baab-de570b5fc5b3" Jan 22 06:47:54 crc kubenswrapper[4720]: I0122 06:47:54.987430 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" podStartSLOduration=12.987409102 podStartE2EDuration="12.987409102s" podCreationTimestamp="2026-01-22 06:47:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:47:54.950336376 +0000 UTC m=+767.092243111" watchObservedRunningTime="2026-01-22 06:47:54.987409102 +0000 UTC m=+767.129315807" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.869460 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-88ll2"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.874486 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.874671 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.877467 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.885866 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.886442 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.887359 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.887451 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.901975 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9tl9d"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.902123 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.902668 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.917863 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.918186 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.919093 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.943994 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.944184 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.944892 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.951540 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"] Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.951673 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.952236 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:47:55 crc kubenswrapper[4720]: I0122 06:47:55.953946 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:47:55 crc kubenswrapper[4720]: E0122 06:47:55.979139 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(e25e27741fa0fd1345b9c0f813211707cfbdadd7258dcc147932e200fe230eda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 22 06:47:55 crc kubenswrapper[4720]: E0122 06:47:55.979244 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(e25e27741fa0fd1345b9c0f813211707cfbdadd7258dcc147932e200fe230eda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:47:55 crc kubenswrapper[4720]: E0122 06:47:55.979294 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(e25e27741fa0fd1345b9c0f813211707cfbdadd7258dcc147932e200fe230eda): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:47:55 crc kubenswrapper[4720]: E0122 06:47:55.979381 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators(dad79855-f5f9-42e6-ba0b-c2134f92c107)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators(dad79855-f5f9-42e6-ba0b-c2134f92c107)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_openshift-operators_dad79855-f5f9-42e6-ba0b-c2134f92c107_0(e25e27741fa0fd1345b9c0f813211707cfbdadd7258dcc147932e200fe230eda): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" podUID="dad79855-f5f9-42e6-ba0b-c2134f92c107" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.016287 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(56df75174256c22bd21c5c121f2f63b8aec27b368796733a526dbcd22459930b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.016388 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(56df75174256c22bd21c5c121f2f63b8aec27b368796733a526dbcd22459930b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.016435 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(56df75174256c22bd21c5c121f2f63b8aec27b368796733a526dbcd22459930b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.016505 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-9tl9d_openshift-operators(758ea564-cd8b-4e93-bd76-563d86418578)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-9tl9d_openshift-operators(758ea564-cd8b-4e93-bd76-563d86418578)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-9tl9d_openshift-operators_758ea564-cd8b-4e93-bd76-563d86418578_0(56df75174256c22bd21c5c121f2f63b8aec27b368796733a526dbcd22459930b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" podUID="758ea564-cd8b-4e93-bd76-563d86418578" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.032505 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(76483990f8d846caa805e2ced7840bf5caa5a3adf4bd5cf9a46946d5681d5993): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.032873 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(76483990f8d846caa805e2ced7840bf5caa5a3adf4bd5cf9a46946d5681d5993): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.032928 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(76483990f8d846caa805e2ced7840bf5caa5a3adf4bd5cf9a46946d5681d5993): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.032986 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-88ll2_openshift-operators(db323c34-5995-4cc9-baab-de570b5fc5b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-88ll2_openshift-operators(db323c34-5995-4cc9-baab-de570b5fc5b3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-88ll2_openshift-operators_db323c34-5995-4cc9-baab-de570b5fc5b3_0(76483990f8d846caa805e2ced7840bf5caa5a3adf4bd5cf9a46946d5681d5993): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" podUID="db323c34-5995-4cc9-baab-de570b5fc5b3" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.042663 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(67822e5ffedf3945d945bc7753aa03c221e3047e766d98404b8674dc1e4ff786): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.042771 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(67822e5ffedf3945d945bc7753aa03c221e3047e766d98404b8674dc1e4ff786): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.042812 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(67822e5ffedf3945d945bc7753aa03c221e3047e766d98404b8674dc1e4ff786): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.042871 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators(b47c94b1-cb06-4aa2-aa94-cbf6da840eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators(b47c94b1-cb06-4aa2-aa94-cbf6da840eb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_openshift-operators_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4_0(67822e5ffedf3945d945bc7753aa03c221e3047e766d98404b8674dc1e4ff786): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" podUID="b47c94b1-cb06-4aa2-aa94-cbf6da840eb4" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.073447 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(652df38b0ea0b22e946409efb401a1c749eb0df01cfb556f583c951b2ef0b648): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.073534 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(652df38b0ea0b22e946409efb401a1c749eb0df01cfb556f583c951b2ef0b648): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.073562 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(652df38b0ea0b22e946409efb401a1c749eb0df01cfb556f583c951b2ef0b648): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.073627 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-s7ps7_openshift-marketplace(6aceac93-bd1a-4897-a920-8ee803c81cb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-s7ps7_openshift-marketplace(6aceac93-bd1a-4897-a920-8ee803c81cb2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-s7ps7_openshift-marketplace_6aceac93-bd1a-4897-a920-8ee803c81cb2_0(652df38b0ea0b22e946409efb401a1c749eb0df01cfb556f583c951b2ef0b648): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/certified-operators-s7ps7" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.084152 4720 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(c300163ad7b77800a49345cba199f90cfb8a0767e080dc5b3e6169af9c8f450a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.084246 4720 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(c300163ad7b77800a49345cba199f90cfb8a0767e080dc5b3e6169af9c8f450a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.084270 4720 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(c300163ad7b77800a49345cba199f90cfb8a0767e080dc5b3e6169af9c8f450a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:47:56 crc kubenswrapper[4720]: E0122 06:47:56.084319 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators(fd9304c1-f30e-4235-9324-b437e69544ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators(fd9304c1-f30e-4235-9324-b437e69544ee)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5x7g8_openshift-operators_fd9304c1-f30e-4235-9324-b437e69544ee_0(c300163ad7b77800a49345cba199f90cfb8a0767e080dc5b3e6169af9c8f450a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" podUID="fd9304c1-f30e-4235-9324-b437e69544ee" Jan 22 06:47:58 crc kubenswrapper[4720]: I0122 06:47:58.985717 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:47:59 crc kubenswrapper[4720]: W0122 06:47:59.006150 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ccede6a_6547_474f_8288_7058e36c1642.slice/crio-a98940b071644b013eb251810c38312f5faf64c6a10336d050a950de1974beff WatchSource:0}: Error finding container a98940b071644b013eb251810c38312f5faf64c6a10336d050a950de1974beff: Status 404 returned error can't find the container with id a98940b071644b013eb251810c38312f5faf64c6a10336d050a950de1974beff Jan 22 06:47:59 crc kubenswrapper[4720]: I0122 06:47:59.919849 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ccede6a-6547-474f-8288-7058e36c1642" containerID="5be6d492498d5b30932a12c5369c1d92c448f96b972ce049e4bba42bde79f38a" exitCode=0 Jan 22 06:47:59 crc kubenswrapper[4720]: I0122 06:47:59.919952 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerDied","Data":"5be6d492498d5b30932a12c5369c1d92c448f96b972ce049e4bba42bde79f38a"} Jan 22 06:47:59 crc kubenswrapper[4720]: I0122 06:47:59.920191 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerStarted","Data":"a98940b071644b013eb251810c38312f5faf64c6a10336d050a950de1974beff"} Jan 22 06:48:01 crc kubenswrapper[4720]: I0122 06:48:01.932112 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" 
event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerStarted","Data":"d044b6451d29e46b540c65029f61d5cb8562152fc834df710cc8af8b265a0966"} Jan 22 06:48:02 crc kubenswrapper[4720]: I0122 06:48:02.939050 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ccede6a-6547-474f-8288-7058e36c1642" containerID="d044b6451d29e46b540c65029f61d5cb8562152fc834df710cc8af8b265a0966" exitCode=0 Jan 22 06:48:02 crc kubenswrapper[4720]: I0122 06:48:02.939135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerDied","Data":"d044b6451d29e46b540c65029f61d5cb8562152fc834df710cc8af8b265a0966"} Jan 22 06:48:03 crc kubenswrapper[4720]: I0122 06:48:03.947247 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerStarted","Data":"b2e13991b6228fa7e019e5443a51fbe1c1ea383b1828f327dfa7025752aef7f5"} Jan 22 06:48:03 crc kubenswrapper[4720]: I0122 06:48:03.977161 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-znwhq" podStartSLOduration=20.470917488 podStartE2EDuration="23.977132478s" podCreationTimestamp="2026-01-22 06:47:40 +0000 UTC" firstStartedPulling="2026-01-22 06:47:59.92144684 +0000 UTC m=+772.063353545" lastFinishedPulling="2026-01-22 06:48:03.42766183 +0000 UTC m=+775.569568535" observedRunningTime="2026-01-22 06:48:03.970418241 +0000 UTC m=+776.112324946" watchObservedRunningTime="2026-01-22 06:48:03.977132478 +0000 UTC m=+776.119039193" Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.209640 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.209676 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.210426 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.210480 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.698256 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8"] Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.719341 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-88ll2"] Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.969640 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" event={"ID":"fd9304c1-f30e-4235-9324-b437e69544ee","Type":"ContainerStarted","Data":"331f758e56f25421e8f33c2b8c1b04f819bc2b972d637baf5dcb737ee3ecef5c"} Jan 22 06:48:07 crc kubenswrapper[4720]: I0122 06:48:07.971322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" event={"ID":"db323c34-5995-4cc9-baab-de570b5fc5b3","Type":"ContainerStarted","Data":"c5720973b39401a4cda6c74793d3dcd3c3e90c3626c26152d9b81b3996b442bf"} Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.210047 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.210079 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.210189 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.216008 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.216058 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.216299 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.530536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.574981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb"] Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.614738 4720 scope.go:117] "RemoveContainer" containerID="b71047289bcefd19da4f70da8db4ee3456912a253f598d85540effeea52ca966" Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.616051 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-9tl9d"] Jan 22 06:48:08 crc kubenswrapper[4720]: W0122 06:48:08.618881 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod758ea564_cd8b_4e93_bd76_563d86418578.slice/crio-4e4c92417c61d614f25c77a10c7cfa2367adf456ff045253b4cca240737ca64f WatchSource:0}: Error 
finding container 4e4c92417c61d614f25c77a10c7cfa2367adf456ff045253b4cca240737ca64f: Status 404 returned error can't find the container with id 4e4c92417c61d614f25c77a10c7cfa2367adf456ff045253b4cca240737ca64f Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.979441 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" event={"ID":"dad79855-f5f9-42e6-ba0b-c2134f92c107","Type":"ContainerStarted","Data":"7fcdec5f490ed79a1cabe489163d1bed17a7d02d2d48f298b7f4e1b0f582fe41"} Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.981483 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerStarted","Data":"59ccb790c2f9d75cbb4a065d00ee910c59c7b07ba690818804fbd8d2215a4455"} Jan 22 06:48:08 crc kubenswrapper[4720]: I0122 06:48:08.982460 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" event={"ID":"758ea564-cd8b-4e93-bd76-563d86418578","Type":"ContainerStarted","Data":"4e4c92417c61d614f25c77a10c7cfa2367adf456ff045253b4cca240737ca64f"} Jan 22 06:48:10 crc kubenswrapper[4720]: I0122 06:48:10.210449 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:48:10 crc kubenswrapper[4720]: I0122 06:48:10.211178 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" Jan 22 06:48:10 crc kubenswrapper[4720]: I0122 06:48:10.490620 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd"] Jan 22 06:48:10 crc kubenswrapper[4720]: W0122 06:48:10.521303 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47c94b1_cb06_4aa2_aa94_cbf6da840eb4.slice/crio-b852eca0252da0da786ab2f155085982136d9f707690b439ccf95b27447e80c6 WatchSource:0}: Error finding container b852eca0252da0da786ab2f155085982136d9f707690b439ccf95b27447e80c6: Status 404 returned error can't find the container with id b852eca0252da0da786ab2f155085982136d9f707690b439ccf95b27447e80c6 Jan 22 06:48:10 crc kubenswrapper[4720]: I0122 06:48:10.970714 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:10 crc kubenswrapper[4720]: I0122 06:48:10.970795 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.000289 4720 generic.go:334] "Generic (PLEG): container finished" podID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerID="afcdf44560ee0245d11aa35955ddb24c6259b365dee6ea666b8dff00bb57022d" exitCode=0 Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.000422 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerDied","Data":"afcdf44560ee0245d11aa35955ddb24c6259b365dee6ea666b8dff00bb57022d"} Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.006330 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/2.log" Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.009560 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" event={"ID":"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4","Type":"ContainerStarted","Data":"b852eca0252da0da786ab2f155085982136d9f707690b439ccf95b27447e80c6"} Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.057264 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.105191 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:11 crc kubenswrapper[4720]: I0122 06:48:11.805396 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:48:12 crc kubenswrapper[4720]: I0122 06:48:12.888718 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vm9h9" Jan 22 06:48:13 crc kubenswrapper[4720]: I0122 06:48:13.044529 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-znwhq" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="registry-server" containerID="cri-o://b2e13991b6228fa7e019e5443a51fbe1c1ea383b1828f327dfa7025752aef7f5" gracePeriod=2 Jan 22 06:48:14 crc kubenswrapper[4720]: I0122 06:48:14.066992 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ccede6a-6547-474f-8288-7058e36c1642" containerID="b2e13991b6228fa7e019e5443a51fbe1c1ea383b1828f327dfa7025752aef7f5" exitCode=0 Jan 22 06:48:14 crc kubenswrapper[4720]: I0122 06:48:14.067065 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" 
event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerDied","Data":"b2e13991b6228fa7e019e5443a51fbe1c1ea383b1828f327dfa7025752aef7f5"} Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.080091 4720 generic.go:334] "Generic (PLEG): container finished" podID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerID="bb1b3a62311eacef0be3bae88a0c12dfe2b81e78b4bc85c66e211f9713e863aa" exitCode=0 Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.080290 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerDied","Data":"bb1b3a62311eacef0be3bae88a0c12dfe2b81e78b4bc85c66e211f9713e863aa"} Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.567651 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.643208 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content\") pod \"6ccede6a-6547-474f-8288-7058e36c1642\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.643318 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities\") pod \"6ccede6a-6547-474f-8288-7058e36c1642\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.643395 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7nln\" (UniqueName: \"kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln\") pod \"6ccede6a-6547-474f-8288-7058e36c1642\" (UID: \"6ccede6a-6547-474f-8288-7058e36c1642\") " Jan 22 
06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.644228 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities" (OuterVolumeSpecName: "utilities") pod "6ccede6a-6547-474f-8288-7058e36c1642" (UID: "6ccede6a-6547-474f-8288-7058e36c1642"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.644473 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.652101 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln" (OuterVolumeSpecName: "kube-api-access-s7nln") pod "6ccede6a-6547-474f-8288-7058e36c1642" (UID: "6ccede6a-6547-474f-8288-7058e36c1642"). InnerVolumeSpecName "kube-api-access-s7nln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.745820 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7nln\" (UniqueName: \"kubernetes.io/projected/6ccede6a-6547-474f-8288-7058e36c1642-kube-api-access-s7nln\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.787075 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ccede6a-6547-474f-8288-7058e36c1642" (UID: "6ccede6a-6547-474f-8288-7058e36c1642"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:48:15 crc kubenswrapper[4720]: I0122 06:48:15.847497 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ccede6a-6547-474f-8288-7058e36c1642-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.091339 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-znwhq" event={"ID":"6ccede6a-6547-474f-8288-7058e36c1642","Type":"ContainerDied","Data":"a98940b071644b013eb251810c38312f5faf64c6a10336d050a950de1974beff"} Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.091408 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-znwhq" Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.091475 4720 scope.go:117] "RemoveContainer" containerID="b2e13991b6228fa7e019e5443a51fbe1c1ea383b1828f327dfa7025752aef7f5" Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.123111 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.126302 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-znwhq"] Jan 22 06:48:16 crc kubenswrapper[4720]: I0122 06:48:16.222029 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ccede6a-6547-474f-8288-7058e36c1642" path="/var/lib/kubelet/pods/6ccede6a-6547-474f-8288-7058e36c1642/volumes" Jan 22 06:48:23 crc kubenswrapper[4720]: I0122 06:48:23.809080 4720 scope.go:117] "RemoveContainer" containerID="d044b6451d29e46b540c65029f61d5cb8562152fc834df710cc8af8b265a0966" Jan 22 06:48:23 crc kubenswrapper[4720]: I0122 06:48:23.836886 4720 scope.go:117] "RemoveContainer" containerID="5be6d492498d5b30932a12c5369c1d92c448f96b972ce049e4bba42bde79f38a" Jan 22 06:48:25 crc 
kubenswrapper[4720]: I0122 06:48:25.161585 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" event={"ID":"dad79855-f5f9-42e6-ba0b-c2134f92c107","Type":"ContainerStarted","Data":"14888e9f252287357c38a4862a0b0b500fdbfcb607d32499bdbe37867bad7468"} Jan 22 06:48:25 crc kubenswrapper[4720]: I0122 06:48:25.165937 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerStarted","Data":"743b0c5921e7e0b0ea6265f0d19eb2904d185a5c5100147cbe98b8762ead7357"} Jan 22 06:48:25 crc kubenswrapper[4720]: I0122 06:48:25.167718 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" event={"ID":"fd9304c1-f30e-4235-9324-b437e69544ee","Type":"ContainerStarted","Data":"d71d24dee1ff75856ac04dd43be2517ea451159451f3e919ec8c56975b4721a3"} Jan 22 06:48:25 crc kubenswrapper[4720]: I0122 06:48:25.171258 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" event={"ID":"db323c34-5995-4cc9-baab-de570b5fc5b3","Type":"ContainerStarted","Data":"1eb9d36c3887d6d6ed2613c761b4cf899ca445a51b8d668b12b22334ce51c4da"} Jan 22 06:48:25 crc kubenswrapper[4720]: I0122 06:48:25.173298 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" event={"ID":"b47c94b1-cb06-4aa2-aa94-cbf6da840eb4","Type":"ContainerStarted","Data":"73a5ab47772222e852ef6b5d26d540f8c6fc316b1970889141d62a5f87ab1db8"} Jan 22 06:48:25 crc kubenswrapper[4720]: I0122 06:48:25.175169 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" 
event={"ID":"758ea564-cd8b-4e93-bd76-563d86418578","Type":"ContainerStarted","Data":"a52fb1d104bf06e2229cb9fb5937872324529a938b005b0f8fc06b1120908e53"} Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.216241 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5x7g8" podStartSLOduration=20.02860951 podStartE2EDuration="36.216209409s" podCreationTimestamp="2026-01-22 06:47:52 +0000 UTC" firstStartedPulling="2026-01-22 06:48:07.71024703 +0000 UTC m=+779.852153735" lastFinishedPulling="2026-01-22 06:48:23.897846929 +0000 UTC m=+796.039753634" observedRunningTime="2026-01-22 06:48:25.19267074 +0000 UTC m=+797.334577445" watchObservedRunningTime="2026-01-22 06:48:28.216209409 +0000 UTC m=+800.358116144" Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.219523 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb" podStartSLOduration=20.91260704 podStartE2EDuration="36.219487001s" podCreationTimestamp="2026-01-22 06:47:52 +0000 UTC" firstStartedPulling="2026-01-22 06:48:08.590557026 +0000 UTC m=+780.732463731" lastFinishedPulling="2026-01-22 06:48:23.897436987 +0000 UTC m=+796.039343692" observedRunningTime="2026-01-22 06:48:28.207362252 +0000 UTC m=+800.349268967" watchObservedRunningTime="2026-01-22 06:48:28.219487001 +0000 UTC m=+800.361393706" Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.235246 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" podStartSLOduration=18.075123321 podStartE2EDuration="34.235227281s" podCreationTimestamp="2026-01-22 06:47:54 +0000 UTC" firstStartedPulling="2026-01-22 06:48:07.736934916 +0000 UTC m=+779.878841621" lastFinishedPulling="2026-01-22 06:48:23.897038876 +0000 UTC m=+796.038945581" observedRunningTime="2026-01-22 06:48:28.232508435 +0000 UTC 
m=+800.374415140" watchObservedRunningTime="2026-01-22 06:48:28.235227281 +0000 UTC m=+800.377133986" Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.252636 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd" podStartSLOduration=22.867469728 podStartE2EDuration="36.252612326s" podCreationTimestamp="2026-01-22 06:47:52 +0000 UTC" firstStartedPulling="2026-01-22 06:48:10.527250257 +0000 UTC m=+782.669156962" lastFinishedPulling="2026-01-22 06:48:23.912392855 +0000 UTC m=+796.054299560" observedRunningTime="2026-01-22 06:48:28.250006584 +0000 UTC m=+800.391913289" watchObservedRunningTime="2026-01-22 06:48:28.252612326 +0000 UTC m=+800.394519051" Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.274660 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s7ps7" podStartSLOduration=32.365840387 podStartE2EDuration="45.274613791s" podCreationTimestamp="2026-01-22 06:47:43 +0000 UTC" firstStartedPulling="2026-01-22 06:48:11.003749195 +0000 UTC m=+783.145655900" lastFinishedPulling="2026-01-22 06:48:23.912522579 +0000 UTC m=+796.054429304" observedRunningTime="2026-01-22 06:48:28.273184401 +0000 UTC m=+800.415091106" watchObservedRunningTime="2026-01-22 06:48:28.274613791 +0000 UTC m=+800.416520496" Jan 22 06:48:28 crc kubenswrapper[4720]: I0122 06:48:28.323828 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" podStartSLOduration=19.032824679 podStartE2EDuration="34.323775965s" podCreationTimestamp="2026-01-22 06:47:54 +0000 UTC" firstStartedPulling="2026-01-22 06:48:08.62147275 +0000 UTC m=+780.763379455" lastFinishedPulling="2026-01-22 06:48:23.912424026 +0000 UTC m=+796.054330741" observedRunningTime="2026-01-22 06:48:28.309383393 +0000 UTC m=+800.451290108" watchObservedRunningTime="2026-01-22 
06:48:28.323775965 +0000 UTC m=+800.465682670" Jan 22 06:48:29 crc kubenswrapper[4720]: I0122 06:48:29.781285 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:48:29 crc kubenswrapper[4720]: I0122 06:48:29.781872 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:48:33 crc kubenswrapper[4720]: I0122 06:48:33.512790 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:33 crc kubenswrapper[4720]: I0122 06:48:33.513294 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:33 crc kubenswrapper[4720]: I0122 06:48:33.557051 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.265166 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.316151 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.733283 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.736239 4720 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-9tl9d" Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.920984 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:48:34 crc kubenswrapper[4720]: I0122 06:48:34.928733 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-88ll2" Jan 22 06:48:36 crc kubenswrapper[4720]: I0122 06:48:36.240133 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s7ps7" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="registry-server" containerID="cri-o://743b0c5921e7e0b0ea6265f0d19eb2904d185a5c5100147cbe98b8762ead7357" gracePeriod=2 Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.260017 4720 generic.go:334] "Generic (PLEG): container finished" podID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerID="743b0c5921e7e0b0ea6265f0d19eb2904d185a5c5100147cbe98b8762ead7357" exitCode=0 Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.260561 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerDied","Data":"743b0c5921e7e0b0ea6265f0d19eb2904d185a5c5100147cbe98b8762ead7357"} Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.596650 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.730972 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities\") pod \"6aceac93-bd1a-4897-a920-8ee803c81cb2\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.731164 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvrkj\" (UniqueName: \"kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj\") pod \"6aceac93-bd1a-4897-a920-8ee803c81cb2\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.731235 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content\") pod \"6aceac93-bd1a-4897-a920-8ee803c81cb2\" (UID: \"6aceac93-bd1a-4897-a920-8ee803c81cb2\") " Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.731837 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities" (OuterVolumeSpecName: "utilities") pod "6aceac93-bd1a-4897-a920-8ee803c81cb2" (UID: "6aceac93-bd1a-4897-a920-8ee803c81cb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.739204 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj" (OuterVolumeSpecName: "kube-api-access-mvrkj") pod "6aceac93-bd1a-4897-a920-8ee803c81cb2" (UID: "6aceac93-bd1a-4897-a920-8ee803c81cb2"). InnerVolumeSpecName "kube-api-access-mvrkj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.794963 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aceac93-bd1a-4897-a920-8ee803c81cb2" (UID: "6aceac93-bd1a-4897-a920-8ee803c81cb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.833491 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.833545 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvrkj\" (UniqueName: \"kubernetes.io/projected/6aceac93-bd1a-4897-a920-8ee803c81cb2-kube-api-access-mvrkj\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:39 crc kubenswrapper[4720]: I0122 06:48:39.833564 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aceac93-bd1a-4897-a920-8ee803c81cb2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.268505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ps7" event={"ID":"6aceac93-bd1a-4897-a920-8ee803c81cb2","Type":"ContainerDied","Data":"59ccb790c2f9d75cbb4a065d00ee910c59c7b07ba690818804fbd8d2215a4455"} Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.268571 4720 scope.go:117] "RemoveContainer" containerID="743b0c5921e7e0b0ea6265f0d19eb2904d185a5c5100147cbe98b8762ead7357" Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.268719 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7ps7" Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.292215 4720 scope.go:117] "RemoveContainer" containerID="bb1b3a62311eacef0be3bae88a0c12dfe2b81e78b4bc85c66e211f9713e863aa" Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.292583 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.305310 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s7ps7"] Jan 22 06:48:40 crc kubenswrapper[4720]: I0122 06:48:40.320803 4720 scope.go:117] "RemoveContainer" containerID="afcdf44560ee0245d11aa35955ddb24c6259b365dee6ea666b8dff00bb57022d" Jan 22 06:48:42 crc kubenswrapper[4720]: I0122 06:48:42.219671 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" path="/var/lib/kubelet/pods/6aceac93-bd1a-4897-a920-8ee803c81cb2/volumes" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.488976 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"] Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489338 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="registry-server" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489357 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="registry-server" Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489389 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="registry-server" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489396 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="registry-server" Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489408 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="extract-utilities" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489418 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="extract-utilities" Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489427 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="extract-utilities" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489434 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="extract-utilities" Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489445 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="extract-content" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489453 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="extract-content" Jan 22 06:48:43 crc kubenswrapper[4720]: E0122 06:48:43.489466 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="extract-content" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489472 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="extract-content" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489606 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aceac93-bd1a-4897-a920-8ee803c81cb2" containerName="registry-server" Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.489620 4720 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="6ccede6a-6547-474f-8288-7058e36c1642" containerName="registry-server"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.490697 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.493171 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.502141 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"]
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.583072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.583242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrqrz\" (UniqueName: \"kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.583385 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.685535 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.685719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.685818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrqrz\" (UniqueName: \"kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.686427 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.686749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.711089 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrqrz\" (UniqueName: \"kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:43 crc kubenswrapper[4720]: I0122 06:48:43.819434 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:44 crc kubenswrapper[4720]: I0122 06:48:44.232267 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"]
Jan 22 06:48:44 crc kubenswrapper[4720]: I0122 06:48:44.304289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq" event={"ID":"86c83893-dd50-4631-aae4-b1069bac73c6","Type":"ContainerStarted","Data":"85dd5fd2ba2ec019fa33042a9829350355b11a4860774af13475a0b23167afb6"}
Jan 22 06:48:45 crc kubenswrapper[4720]: I0122 06:48:45.315796 4720 generic.go:334] "Generic (PLEG): container finished" podID="86c83893-dd50-4631-aae4-b1069bac73c6" containerID="6e43c2a7739d776f885a9200812a21fe6b297d31853999d25be4f8e196c552d8" exitCode=0
Jan 22 06:48:45 crc kubenswrapper[4720]: I0122 06:48:45.315945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq" event={"ID":"86c83893-dd50-4631-aae4-b1069bac73c6","Type":"ContainerDied","Data":"6e43c2a7739d776f885a9200812a21fe6b297d31853999d25be4f8e196c552d8"}
Jan 22 06:48:47 crc kubenswrapper[4720]: I0122 06:48:47.335895 4720 generic.go:334] "Generic (PLEG): container finished" podID="86c83893-dd50-4631-aae4-b1069bac73c6" containerID="40385284e9091a83443f58c9288c29bca75d8459b543d96d72614891f0a10255" exitCode=0
Jan 22 06:48:47 crc kubenswrapper[4720]: I0122 06:48:47.336073 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq" event={"ID":"86c83893-dd50-4631-aae4-b1069bac73c6","Type":"ContainerDied","Data":"40385284e9091a83443f58c9288c29bca75d8459b543d96d72614891f0a10255"}
Jan 22 06:48:48 crc kubenswrapper[4720]: I0122 06:48:48.345746 4720 generic.go:334] "Generic (PLEG): container finished" podID="86c83893-dd50-4631-aae4-b1069bac73c6" containerID="5d5b0f0a46a356256b20a101ce22a8174dc572abb2b14dfbe7142fe37ccaf798" exitCode=0
Jan 22 06:48:48 crc kubenswrapper[4720]: I0122 06:48:48.345811 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq" event={"ID":"86c83893-dd50-4631-aae4-b1069bac73c6","Type":"ContainerDied","Data":"5d5b0f0a46a356256b20a101ce22a8174dc572abb2b14dfbe7142fe37ccaf798"}
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.859737 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.975936 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util\") pod \"86c83893-dd50-4631-aae4-b1069bac73c6\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") "
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.976453 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle\") pod \"86c83893-dd50-4631-aae4-b1069bac73c6\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") "
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.976516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrqrz\" (UniqueName: \"kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz\") pod \"86c83893-dd50-4631-aae4-b1069bac73c6\" (UID: \"86c83893-dd50-4631-aae4-b1069bac73c6\") "
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.976935 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle" (OuterVolumeSpecName: "bundle") pod "86c83893-dd50-4631-aae4-b1069bac73c6" (UID: "86c83893-dd50-4631-aae4-b1069bac73c6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.977072 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:48:49 crc kubenswrapper[4720]: I0122 06:48:49.982883 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz" (OuterVolumeSpecName: "kube-api-access-lrqrz") pod "86c83893-dd50-4631-aae4-b1069bac73c6" (UID: "86c83893-dd50-4631-aae4-b1069bac73c6"). InnerVolumeSpecName "kube-api-access-lrqrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.007877 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util" (OuterVolumeSpecName: "util") pod "86c83893-dd50-4631-aae4-b1069bac73c6" (UID: "86c83893-dd50-4631-aae4-b1069bac73c6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.078253 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86c83893-dd50-4631-aae4-b1069bac73c6-util\") on node \"crc\" DevicePath \"\""
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.078305 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrqrz\" (UniqueName: \"kubernetes.io/projected/86c83893-dd50-4631-aae4-b1069bac73c6-kube-api-access-lrqrz\") on node \"crc\" DevicePath \"\""
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.368269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq" event={"ID":"86c83893-dd50-4631-aae4-b1069bac73c6","Type":"ContainerDied","Data":"85dd5fd2ba2ec019fa33042a9829350355b11a4860774af13475a0b23167afb6"}
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.368329 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85dd5fd2ba2ec019fa33042a9829350355b11a4860774af13475a0b23167afb6"
Jan 22 06:48:50 crc kubenswrapper[4720]: I0122 06:48:50.368360 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.142670 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hb6mk"]
Jan 22 06:48:52 crc kubenswrapper[4720]: E0122 06:48:52.143866 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="pull"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.143981 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="pull"
Jan 22 06:48:52 crc kubenswrapper[4720]: E0122 06:48:52.144046 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="util"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.144093 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="util"
Jan 22 06:48:52 crc kubenswrapper[4720]: E0122 06:48:52.144147 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="extract"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.144199 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="extract"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.144358 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c83893-dd50-4631-aae4-b1069bac73c6" containerName="extract"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.144875 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.148885 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.148950 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jrl56"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.149705 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.154512 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hb6mk"]
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.313079 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52gd\" (UniqueName: \"kubernetes.io/projected/2e442158-14c1-4ed3-a62b-679e64c48148-kube-api-access-h52gd\") pod \"nmstate-operator-646758c888-hb6mk\" (UID: \"2e442158-14c1-4ed3-a62b-679e64c48148\") " pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.414158 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52gd\" (UniqueName: \"kubernetes.io/projected/2e442158-14c1-4ed3-a62b-679e64c48148-kube-api-access-h52gd\") pod \"nmstate-operator-646758c888-hb6mk\" (UID: \"2e442158-14c1-4ed3-a62b-679e64c48148\") " pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.453330 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52gd\" (UniqueName: \"kubernetes.io/projected/2e442158-14c1-4ed3-a62b-679e64c48148-kube-api-access-h52gd\") pod \"nmstate-operator-646758c888-hb6mk\" (UID: \"2e442158-14c1-4ed3-a62b-679e64c48148\") " pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.461707 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk"
Jan 22 06:48:52 crc kubenswrapper[4720]: I0122 06:48:52.940571 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hb6mk"]
Jan 22 06:48:52 crc kubenswrapper[4720]: W0122 06:48:52.945268 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e442158_14c1_4ed3_a62b_679e64c48148.slice/crio-4eace5d567fe5bb02dd05ab26c04993a3a67461b194af668a4a9e78cba207b5c WatchSource:0}: Error finding container 4eace5d567fe5bb02dd05ab26c04993a3a67461b194af668a4a9e78cba207b5c: Status 404 returned error can't find the container with id 4eace5d567fe5bb02dd05ab26c04993a3a67461b194af668a4a9e78cba207b5c
Jan 22 06:48:53 crc kubenswrapper[4720]: I0122 06:48:53.391492 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk" event={"ID":"2e442158-14c1-4ed3-a62b-679e64c48148","Type":"ContainerStarted","Data":"4eace5d567fe5bb02dd05ab26c04993a3a67461b194af668a4a9e78cba207b5c"}
Jan 22 06:48:55 crc kubenswrapper[4720]: I0122 06:48:55.407457 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk" event={"ID":"2e442158-14c1-4ed3-a62b-679e64c48148","Type":"ContainerStarted","Data":"e083f82c94ba75e76c5ddad7ef2a246ff9ea95a1405681f94c88d7ff9aeab9ab"}
Jan 22 06:48:55 crc kubenswrapper[4720]: I0122 06:48:55.430187 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hb6mk" podStartSLOduration=1.245344054 podStartE2EDuration="3.430162241s" podCreationTimestamp="2026-01-22 06:48:52 +0000 UTC" firstStartedPulling="2026-01-22 06:48:52.949631358 +0000 UTC m=+825.091538063" lastFinishedPulling="2026-01-22 06:48:55.134449545 +0000 UTC m=+827.276356250" observedRunningTime="2026-01-22 06:48:55.429435861 +0000 UTC m=+827.571342586" watchObservedRunningTime="2026-01-22 06:48:55.430162241 +0000 UTC m=+827.572068956"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.389989 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.391096 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.394797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-lddwh"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.400024 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.400957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.414189 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.418445 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.421519 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.466971 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lfx9d"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.470292 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570724 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sffn\" (UniqueName: \"kubernetes.io/projected/10356d8e-1761-4a55-ad79-fee34dd3aabf-kube-api-access-9sffn\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570790 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1d307b97-f8d7-4624-ad82-c40af972eeff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570835 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-ovs-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570888 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62bbq\" (UniqueName: \"kubernetes.io/projected/bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb-kube-api-access-62bbq\") pod \"nmstate-metrics-54757c584b-hnrr7\" (UID: \"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570942 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-dbus-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570965 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2xt\" (UniqueName: \"kubernetes.io/projected/1d307b97-f8d7-4624-ad82-c40af972eeff-kube-api-access-6n2xt\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.570988 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-nmstate-lock\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.656477 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.657880 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.662493 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.662521 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.662656 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6c5sm"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.666090 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-nmstate-lock\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sffn\" (UniqueName: \"kubernetes.io/projected/10356d8e-1761-4a55-ad79-fee34dd3aabf-kube-api-access-9sffn\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672433 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1d307b97-f8d7-4624-ad82-c40af972eeff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-ovs-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672490 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-nmstate-lock\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62bbq\" (UniqueName: \"kubernetes.io/projected/bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb-kube-api-access-62bbq\") pod \"nmstate-metrics-54757c584b-hnrr7\" (UID: \"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672547 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-dbus-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672569 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2xt\" (UniqueName: \"kubernetes.io/projected/1d307b97-f8d7-4624-ad82-c40af972eeff-kube-api-access-6n2xt\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.672939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-ovs-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.673396 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/10356d8e-1761-4a55-ad79-fee34dd3aabf-dbus-socket\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.681170 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1d307b97-f8d7-4624-ad82-c40af972eeff-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.707167 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2xt\" (UniqueName: \"kubernetes.io/projected/1d307b97-f8d7-4624-ad82-c40af972eeff-kube-api-access-6n2xt\") pod \"nmstate-webhook-8474b5b9d8-68ctq\" (UID: \"1d307b97-f8d7-4624-ad82-c40af972eeff\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.708759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62bbq\" (UniqueName: \"kubernetes.io/projected/bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb-kube-api-access-62bbq\") pod \"nmstate-metrics-54757c584b-hnrr7\" (UID: \"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.708826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sffn\" (UniqueName: \"kubernetes.io/projected/10356d8e-1761-4a55-ad79-fee34dd3aabf-kube-api-access-9sffn\") pod \"nmstate-handler-lfx9d\" (UID: \"10356d8e-1761-4a55-ad79-fee34dd3aabf\") " pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.715332 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.729503 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.774666 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5515f37e-3d61-49f0-ba5d-5d6896527923-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.774739 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt94t\" (UniqueName: \"kubernetes.io/projected/5515f37e-3d61-49f0-ba5d-5d6896527923-kube-api-access-wt94t\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.774780 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5515f37e-3d61-49f0-ba5d-5d6896527923-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.793354 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-lfx9d"
Jan 22 06:48:56 crc kubenswrapper[4720]: W0122 06:48:56.833109 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10356d8e_1761_4a55_ad79_fee34dd3aabf.slice/crio-3a6945b10512b5cde01ef5e6b742cddcc6914a15c42ba7d6373f632919781a90 WatchSource:0}: Error finding container 3a6945b10512b5cde01ef5e6b742cddcc6914a15c42ba7d6373f632919781a90: Status 404 returned error can't find the container with id 3a6945b10512b5cde01ef5e6b742cddcc6914a15c42ba7d6373f632919781a90
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.876872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5515f37e-3d61-49f0-ba5d-5d6896527923-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.876957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt94t\" (UniqueName: \"kubernetes.io/projected/5515f37e-3d61-49f0-ba5d-5d6896527923-kube-api-access-wt94t\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.876993 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5515f37e-3d61-49f0-ba5d-5d6896527923-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.878568 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5515f37e-3d61-49f0-ba5d-5d6896527923-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.882074 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.883116 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.888258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5515f37e-3d61-49f0-ba5d-5d6896527923-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.923888 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"]
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.950066 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt94t\" (UniqueName: \"kubernetes.io/projected/5515f37e-3d61-49f0-ba5d-5d6896527923-kube-api-access-wt94t\") pod \"nmstate-console-plugin-7754f76f8b-b4rzf\" (UID: \"5515f37e-3d61-49f0-ba5d-5d6896527923\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979821 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979858 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.979901 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh5c\" (UniqueName: \"kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.980005 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:56 crc kubenswrapper[4720]: I0122 06:48:56.982430 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"
Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.092932 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093011 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093036 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093078 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh5c\" (UniqueName: \"kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c\") pod \"console-795669dc4d-wqfdz\" (UID: 
\"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093150 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.093179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.094414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.094460 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.094542 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " 
pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.094894 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.113011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.117237 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.120802 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh5c\" (UniqueName: \"kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c\") pod \"console-795669dc4d-wqfdz\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") " pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.217374 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.257317 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq"] Jan 22 06:48:57 crc kubenswrapper[4720]: W0122 06:48:57.272208 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d307b97_f8d7_4624_ad82_c40af972eeff.slice/crio-ff04edc01220b86a972e0575116ee1bb5223e6e9be46767c82d950bc9fd4fb4a WatchSource:0}: Error finding container ff04edc01220b86a972e0575116ee1bb5223e6e9be46767c82d950bc9fd4fb4a: Status 404 returned error can't find the container with id ff04edc01220b86a972e0575116ee1bb5223e6e9be46767c82d950bc9fd4fb4a Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.325799 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf"] Jan 22 06:48:57 crc kubenswrapper[4720]: W0122 06:48:57.336146 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5515f37e_3d61_49f0_ba5d_5d6896527923.slice/crio-980df641def603c95beef6948fdaa39d125a0f3a2933f9e273d4fc9f0502f2c8 WatchSource:0}: Error finding container 980df641def603c95beef6948fdaa39d125a0f3a2933f9e273d4fc9f0502f2c8: Status 404 returned error can't find the container with id 980df641def603c95beef6948fdaa39d125a0f3a2933f9e273d4fc9f0502f2c8 Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.376061 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-hnrr7"] Jan 22 06:48:57 crc kubenswrapper[4720]: W0122 06:48:57.390371 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbce8fd7c_de7e_4ca2_bebf_c37b5c6d5ddb.slice/crio-6c7229a686b2dff6fdbcba6f119e6698a0b6018ce6e586ecf2f343ea5cdf384f 
WatchSource:0}: Error finding container 6c7229a686b2dff6fdbcba6f119e6698a0b6018ce6e586ecf2f343ea5cdf384f: Status 404 returned error can't find the container with id 6c7229a686b2dff6fdbcba6f119e6698a0b6018ce6e586ecf2f343ea5cdf384f Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.420927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf" event={"ID":"5515f37e-3d61-49f0-ba5d-5d6896527923","Type":"ContainerStarted","Data":"980df641def603c95beef6948fdaa39d125a0f3a2933f9e273d4fc9f0502f2c8"} Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.423182 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lfx9d" event={"ID":"10356d8e-1761-4a55-ad79-fee34dd3aabf","Type":"ContainerStarted","Data":"3a6945b10512b5cde01ef5e6b742cddcc6914a15c42ba7d6373f632919781a90"} Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.424388 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7" event={"ID":"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb","Type":"ContainerStarted","Data":"6c7229a686b2dff6fdbcba6f119e6698a0b6018ce6e586ecf2f343ea5cdf384f"} Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.430121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq" event={"ID":"1d307b97-f8d7-4624-ad82-c40af972eeff","Type":"ContainerStarted","Data":"ff04edc01220b86a972e0575116ee1bb5223e6e9be46767c82d950bc9fd4fb4a"} Jan 22 06:48:57 crc kubenswrapper[4720]: I0122 06:48:57.454828 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"] Jan 22 06:48:57 crc kubenswrapper[4720]: W0122 06:48:57.465140 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe824a8_1f65_47bd_af5b_88f2cc67c738.slice/crio-48f1e6b5c8e775221749369cbcc3b0036106be8923c6357fa39021085b6ba8c9 WatchSource:0}: Error finding container 48f1e6b5c8e775221749369cbcc3b0036106be8923c6357fa39021085b6ba8c9: Status 404 returned error can't find the container with id 48f1e6b5c8e775221749369cbcc3b0036106be8923c6357fa39021085b6ba8c9 Jan 22 06:48:58 crc kubenswrapper[4720]: I0122 06:48:58.452951 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795669dc4d-wqfdz" event={"ID":"5fe824a8-1f65-47bd-af5b-88f2cc67c738","Type":"ContainerStarted","Data":"48f1e6b5c8e775221749369cbcc3b0036106be8923c6357fa39021085b6ba8c9"} Jan 22 06:48:59 crc kubenswrapper[4720]: I0122 06:48:59.463426 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795669dc4d-wqfdz" event={"ID":"5fe824a8-1f65-47bd-af5b-88f2cc67c738","Type":"ContainerStarted","Data":"e9f398eb7668d1f4f733099ad6defc58c2c75458c49654bd9caad13e636638fd"} Jan 22 06:48:59 crc kubenswrapper[4720]: I0122 06:48:59.493164 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-795669dc4d-wqfdz" podStartSLOduration=3.493141022 podStartE2EDuration="3.493141022s" podCreationTimestamp="2026-01-22 06:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:48:59.492329539 +0000 UTC m=+831.634236354" watchObservedRunningTime="2026-01-22 06:48:59.493141022 +0000 UTC m=+831.635047727" Jan 22 06:48:59 crc kubenswrapper[4720]: I0122 06:48:59.780896 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:48:59 crc 
kubenswrapper[4720]: I0122 06:48:59.781019 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.517357 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq" event={"ID":"1d307b97-f8d7-4624-ad82-c40af972eeff","Type":"ContainerStarted","Data":"d8ba39b5782f98c2003afa779426e35ca5078a83d05c04fe9e82ac529419fc17"} Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.518290 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq" Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.520521 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf" event={"ID":"5515f37e-3d61-49f0-ba5d-5d6896527923","Type":"ContainerStarted","Data":"a8016bd75671c1d872c60ae1817274bd7d6223dd71ece790a495acc9306f75a9"} Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.525214 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lfx9d" event={"ID":"10356d8e-1761-4a55-ad79-fee34dd3aabf","Type":"ContainerStarted","Data":"cb2a35230261cb29547386addd51bbf8efe5d5e7eb14a87ac74215c3467e81ca"} Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.525952 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lfx9d" Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.527664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7" 
event={"ID":"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb","Type":"ContainerStarted","Data":"0eddd23039f67bee7ee863e0837e57c6c44d4088db8bee79afd606e3f548e4c7"} Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.568957 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-b4rzf" podStartSLOduration=1.993562119 podStartE2EDuration="9.568892903s" podCreationTimestamp="2026-01-22 06:48:56 +0000 UTC" firstStartedPulling="2026-01-22 06:48:57.337979765 +0000 UTC m=+829.479886470" lastFinishedPulling="2026-01-22 06:49:04.913310509 +0000 UTC m=+837.055217254" observedRunningTime="2026-01-22 06:49:05.567634797 +0000 UTC m=+837.709541522" watchObservedRunningTime="2026-01-22 06:49:05.568892903 +0000 UTC m=+837.710799618" Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.571154 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq" podStartSLOduration=1.8996697139999998 podStartE2EDuration="9.571141785s" podCreationTimestamp="2026-01-22 06:48:56 +0000 UTC" firstStartedPulling="2026-01-22 06:48:57.277031601 +0000 UTC m=+829.418938306" lastFinishedPulling="2026-01-22 06:49:04.948503652 +0000 UTC m=+837.090410377" observedRunningTime="2026-01-22 06:49:05.545443037 +0000 UTC m=+837.687349772" watchObservedRunningTime="2026-01-22 06:49:05.571141785 +0000 UTC m=+837.713048510" Jan 22 06:49:05 crc kubenswrapper[4720]: I0122 06:49:05.611622 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lfx9d" podStartSLOduration=1.532862262 podStartE2EDuration="9.611587626s" podCreationTimestamp="2026-01-22 06:48:56 +0000 UTC" firstStartedPulling="2026-01-22 06:48:56.836155428 +0000 UTC m=+828.978062133" lastFinishedPulling="2026-01-22 06:49:04.914880792 +0000 UTC m=+837.056787497" observedRunningTime="2026-01-22 06:49:05.604458557 +0000 UTC m=+837.746365312" 
watchObservedRunningTime="2026-01-22 06:49:05.611587626 +0000 UTC m=+837.753494341" Jan 22 06:49:07 crc kubenswrapper[4720]: I0122 06:49:07.218126 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:49:07 crc kubenswrapper[4720]: I0122 06:49:07.218190 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:49:07 crc kubenswrapper[4720]: I0122 06:49:07.223663 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:49:07 crc kubenswrapper[4720]: I0122 06:49:07.552102 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-795669dc4d-wqfdz" Jan 22 06:49:07 crc kubenswrapper[4720]: I0122 06:49:07.629059 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"] Jan 22 06:49:09 crc kubenswrapper[4720]: I0122 06:49:09.565242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7" event={"ID":"bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb","Type":"ContainerStarted","Data":"56345e2822331af5014212b2657b06dcdc2cac908ddcdc6f174c2d4475d021f1"} Jan 22 06:49:09 crc kubenswrapper[4720]: I0122 06:49:09.588010 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-hnrr7" podStartSLOduration=1.951493363 podStartE2EDuration="13.587976647s" podCreationTimestamp="2026-01-22 06:48:56 +0000 UTC" firstStartedPulling="2026-01-22 06:48:57.395798111 +0000 UTC m=+829.537704826" lastFinishedPulling="2026-01-22 06:49:09.032281405 +0000 UTC m=+841.174188110" observedRunningTime="2026-01-22 06:49:09.587068082 +0000 UTC m=+841.728974857" watchObservedRunningTime="2026-01-22 06:49:09.587976647 +0000 UTC m=+841.729883392" Jan 22 06:49:11 crc kubenswrapper[4720]: 
I0122 06:49:11.820699 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lfx9d" Jan 22 06:49:16 crc kubenswrapper[4720]: I0122 06:49:16.740199 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-68ctq" Jan 22 06:49:29 crc kubenswrapper[4720]: I0122 06:49:29.780352 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:49:29 crc kubenswrapper[4720]: I0122 06:49:29.781091 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:49:29 crc kubenswrapper[4720]: I0122 06:49:29.781158 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 06:49:29 crc kubenswrapper[4720]: I0122 06:49:29.783429 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 06:49:29 crc kubenswrapper[4720]: I0122 06:49:29.783491 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" 
containerName="machine-config-daemon" containerID="cri-o://5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1" gracePeriod=600 Jan 22 06:49:30 crc kubenswrapper[4720]: I0122 06:49:30.763674 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1" exitCode=0 Jan 22 06:49:30 crc kubenswrapper[4720]: I0122 06:49:30.763768 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1"} Jan 22 06:49:30 crc kubenswrapper[4720]: I0122 06:49:30.764810 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6"} Jan 22 06:49:30 crc kubenswrapper[4720]: I0122 06:49:30.764859 4720 scope.go:117] "RemoveContainer" containerID="7bcb5112b649a106e66f934306ee592f8a752080d8191cf468e62a0e5b343bf1" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.672777 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zv6lm" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" containerID="cri-o://8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07" gracePeriod=15 Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.807164 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr"] Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.808645 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.812254 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.823959 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr"] Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.844141 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xb49\" (UniqueName: \"kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.844547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.844679 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: 
I0122 06:49:32.945747 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xb49\" (UniqueName: \"kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.946178 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.946310 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.946899 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.947243 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:32 crc kubenswrapper[4720]: I0122 06:49:32.983206 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xb49\" (UniqueName: \"kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.130078 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.134272 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zv6lm_86ad3ffd-89b2-4b4a-b1b1-72d6ad907204/console/0.log" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.134338 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156017 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf4tk\" (UniqueName: \"kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156109 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156159 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156203 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156244 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156277 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.156307 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert\") pod \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\" (UID: \"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204\") " Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.157725 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca" (OuterVolumeSpecName: "service-ca") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.164484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config" (OuterVolumeSpecName: "console-config") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.165253 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk" (OuterVolumeSpecName: "kube-api-access-hf4tk") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "kube-api-access-hf4tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.165374 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.167455 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.167688 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.169187 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" (UID: "86ad3ffd-89b2-4b4a-b1b1-72d6ad907204"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258030 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf4tk\" (UniqueName: \"kubernetes.io/projected/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-kube-api-access-hf4tk\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258066 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-service-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258079 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258091 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258108 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258122 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.258138 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:33 crc 
kubenswrapper[4720]: I0122 06:49:33.400713 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr"] Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789168 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zv6lm_86ad3ffd-89b2-4b4a-b1b1-72d6ad907204/console/0.log" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789273 4720 generic.go:334] "Generic (PLEG): container finished" podID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerID="8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07" exitCode=2 Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789324 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zv6lm" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789350 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zv6lm" event={"ID":"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204","Type":"ContainerDied","Data":"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07"} Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789501 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zv6lm" event={"ID":"86ad3ffd-89b2-4b4a-b1b1-72d6ad907204","Type":"ContainerDied","Data":"c5fd865239210e1750f480bcfb9d45e08e6a72b727a98c562dfe6f9cca9746a9"} Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.789542 4720 scope.go:117] "RemoveContainer" containerID="8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.792481 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" 
event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerStarted","Data":"16f2379e31d1029f86f03764e7438171a1eb26f825544a5313b004f31609fa23"} Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.792853 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerStarted","Data":"18e4aa200cece3ac531602d94331a8902916aef6c8b0189f867f06d00a72f5b4"} Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.876833 4720 scope.go:117] "RemoveContainer" containerID="8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07" Jan 22 06:49:33 crc kubenswrapper[4720]: E0122 06:49:33.881451 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07\": container with ID starting with 8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07 not found: ID does not exist" containerID="8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.881528 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07"} err="failed to get container status \"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07\": rpc error: code = NotFound desc = could not find container \"8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07\": container with ID starting with 8500c559d9ee3415d9214bf5106ac73d580edddeb82863b177b4bf6ac6f0be07 not found: ID does not exist" Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.885990 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"] Jan 22 06:49:33 crc kubenswrapper[4720]: I0122 06:49:33.890335 4720 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zv6lm"] Jan 22 06:49:34 crc kubenswrapper[4720]: I0122 06:49:34.221963 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" path="/var/lib/kubelet/pods/86ad3ffd-89b2-4b4a-b1b1-72d6ad907204/volumes" Jan 22 06:49:34 crc kubenswrapper[4720]: I0122 06:49:34.802876 4720 generic.go:334] "Generic (PLEG): container finished" podID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerID="16f2379e31d1029f86f03764e7438171a1eb26f825544a5313b004f31609fa23" exitCode=0 Jan 22 06:49:34 crc kubenswrapper[4720]: I0122 06:49:34.803015 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerDied","Data":"16f2379e31d1029f86f03764e7438171a1eb26f825544a5313b004f31609fa23"} Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.357093 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 06:49:36 crc kubenswrapper[4720]: E0122 06:49:36.359042 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.359215 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.359588 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ad3ffd-89b2-4b4a-b1b1-72d6ad907204" containerName="console" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.361534 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.379197 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.506770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.507272 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wpn4\" (UniqueName: \"kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.507462 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.608206 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.608692 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.608833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wpn4\" (UniqueName: \"kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.609305 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.609793 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.637541 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wpn4\" (UniqueName: \"kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4\") pod \"community-operators-2gqg2\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:36 crc kubenswrapper[4720]: I0122 06:49:36.691802 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:37 crc kubenswrapper[4720]: I0122 06:49:37.060679 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 06:49:37 crc kubenswrapper[4720]: I0122 06:49:37.855445 4720 generic.go:334] "Generic (PLEG): container finished" podID="90763cf9-c272-4870-8f6d-9e3b506a712f" containerID="bddc3808f07ea3e4fabf2daf1ac7b44c1a86b38710cd563c632beb7f9cdb7fcc" exitCode=0 Jan 22 06:49:37 crc kubenswrapper[4720]: I0122 06:49:37.855519 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerDied","Data":"bddc3808f07ea3e4fabf2daf1ac7b44c1a86b38710cd563c632beb7f9cdb7fcc"} Jan 22 06:49:37 crc kubenswrapper[4720]: I0122 06:49:37.855561 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerStarted","Data":"363332e23e9640fee57c53b74e58042ef7f201ba79c0109ae2b0f2dc18cdfb4e"} Jan 22 06:49:38 crc kubenswrapper[4720]: I0122 06:49:38.883111 4720 generic.go:334] "Generic (PLEG): container finished" podID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerID="63c16eff1c0aa6a8284b9f9e4e0db3a75d088daac80c43d255e99c580135f83b" exitCode=0 Jan 22 06:49:38 crc kubenswrapper[4720]: I0122 06:49:38.883655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerDied","Data":"63c16eff1c0aa6a8284b9f9e4e0db3a75d088daac80c43d255e99c580135f83b"} Jan 22 06:49:39 crc kubenswrapper[4720]: I0122 06:49:39.895336 4720 generic.go:334] "Generic (PLEG): container finished" podID="da345b49-94f9-4cab-ba07-78dd68bd874b" 
containerID="d125839d6c91135cdded1414d17dcb27b36e40c3a73bb9ad4406633a340a48bf" exitCode=0 Jan 22 06:49:39 crc kubenswrapper[4720]: I0122 06:49:39.895406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerDied","Data":"d125839d6c91135cdded1414d17dcb27b36e40c3a73bb9ad4406633a340a48bf"} Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.760739 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.762999 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.829817 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.937011 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.937076 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:44 crc kubenswrapper[4720]: I0122 06:49:44.937102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggd9\" (UniqueName: 
\"kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.038258 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggd9\" (UniqueName: \"kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.038387 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.038425 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.039284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.039334 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.061135 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggd9\" (UniqueName: \"kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9\") pod \"redhat-marketplace-8tfbw\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:45 crc kubenswrapper[4720]: I0122 06:49:45.079602 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.698762 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.866557 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xb49\" (UniqueName: \"kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49\") pod \"da345b49-94f9-4cab-ba07-78dd68bd874b\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.866691 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util\") pod \"da345b49-94f9-4cab-ba07-78dd68bd874b\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.866755 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle\") pod 
\"da345b49-94f9-4cab-ba07-78dd68bd874b\" (UID: \"da345b49-94f9-4cab-ba07-78dd68bd874b\") " Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.871946 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle" (OuterVolumeSpecName: "bundle") pod "da345b49-94f9-4cab-ba07-78dd68bd874b" (UID: "da345b49-94f9-4cab-ba07-78dd68bd874b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.878517 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49" (OuterVolumeSpecName: "kube-api-access-8xb49") pod "da345b49-94f9-4cab-ba07-78dd68bd874b" (UID: "da345b49-94f9-4cab-ba07-78dd68bd874b"). InnerVolumeSpecName "kube-api-access-8xb49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.889889 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.891780 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util" (OuterVolumeSpecName: "util") pod "da345b49-94f9-4cab-ba07-78dd68bd874b" (UID: "da345b49-94f9-4cab-ba07-78dd68bd874b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.954605 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerStarted","Data":"b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f"} Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.958697 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" event={"ID":"da345b49-94f9-4cab-ba07-78dd68bd874b","Type":"ContainerDied","Data":"18e4aa200cece3ac531602d94331a8902916aef6c8b0189f867f06d00a72f5b4"} Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.958866 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18e4aa200cece3ac531602d94331a8902916aef6c8b0189f867f06d00a72f5b4" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.958798 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.968273 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-util\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.968329 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/da345b49-94f9-4cab-ba07-78dd68bd874b-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:46 crc kubenswrapper[4720]: I0122 06:49:46.968348 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xb49\" (UniqueName: \"kubernetes.io/projected/da345b49-94f9-4cab-ba07-78dd68bd874b-kube-api-access-8xb49\") on node \"crc\" DevicePath \"\"" Jan 22 06:49:47 crc kubenswrapper[4720]: I0122 06:49:47.970054 4720 generic.go:334] "Generic (PLEG): container finished" podID="20c3d28d-88e7-43da-81fa-57df712470e9" containerID="1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919" exitCode=0 Jan 22 06:49:47 crc kubenswrapper[4720]: I0122 06:49:47.970148 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerDied","Data":"1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919"} Jan 22 06:49:50 crc kubenswrapper[4720]: I0122 06:49:50.996546 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerStarted","Data":"74e209790f74949de20bc519b38b1efe965e2c1ffc2a4edc8514c244108da16f"} Jan 22 06:49:51 crc kubenswrapper[4720]: I0122 06:49:51.000226 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" 
event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerStarted","Data":"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51"} Jan 22 06:49:52 crc kubenswrapper[4720]: I0122 06:49:52.009725 4720 generic.go:334] "Generic (PLEG): container finished" podID="20c3d28d-88e7-43da-81fa-57df712470e9" containerID="d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51" exitCode=0 Jan 22 06:49:52 crc kubenswrapper[4720]: I0122 06:49:52.009823 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerDied","Data":"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51"} Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.018407 4720 generic.go:334] "Generic (PLEG): container finished" podID="90763cf9-c272-4870-8f6d-9e3b506a712f" containerID="74e209790f74949de20bc519b38b1efe965e2c1ffc2a4edc8514c244108da16f" exitCode=0 Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.018484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerDied","Data":"74e209790f74949de20bc519b38b1efe965e2c1ffc2a4edc8514c244108da16f"} Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.960755 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps"] Jan 22 06:49:53 crc kubenswrapper[4720]: E0122 06:49:53.961500 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="util" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.961588 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="util" Jan 22 06:49:53 crc kubenswrapper[4720]: E0122 06:49:53.961648 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="pull" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.961708 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="pull" Jan 22 06:49:53 crc kubenswrapper[4720]: E0122 06:49:53.961762 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="extract" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.962621 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="extract" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.962836 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="da345b49-94f9-4cab-ba07-78dd68bd874b" containerName="extract" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.963435 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.967798 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.968014 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.968326 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.970701 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qk2mk" Jan 22 06:49:53 crc kubenswrapper[4720]: I0122 06:49:53.972293 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 
06:49:54.024693 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps"] Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.032198 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerStarted","Data":"9e94f7f64ad3716dfb0de52c4c0fe4945be96884dcbafc8e978b4488e870101c"} Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.036084 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerStarted","Data":"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0"} Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.093516 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8tfbw" podStartSLOduration=5.260812236 podStartE2EDuration="10.093494512s" podCreationTimestamp="2026-01-22 06:49:44 +0000 UTC" firstStartedPulling="2026-01-22 06:49:47.982393229 +0000 UTC m=+880.124299934" lastFinishedPulling="2026-01-22 06:49:52.815075505 +0000 UTC m=+884.956982210" observedRunningTime="2026-01-22 06:49:54.088970253 +0000 UTC m=+886.230876978" watchObservedRunningTime="2026-01-22 06:49:54.093494512 +0000 UTC m=+886.235401207" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.141544 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2zgl\" (UniqueName: \"kubernetes.io/projected/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-kube-api-access-s2zgl\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.141609 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-webhook-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.141662 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-apiservice-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.155018 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2gqg2" podStartSLOduration=2.404717495 podStartE2EDuration="18.154994803s" podCreationTimestamp="2026-01-22 06:49:36 +0000 UTC" firstStartedPulling="2026-01-22 06:49:37.873943045 +0000 UTC m=+870.015849750" lastFinishedPulling="2026-01-22 06:49:53.624220353 +0000 UTC m=+885.766127058" observedRunningTime="2026-01-22 06:49:54.154362325 +0000 UTC m=+886.296269030" watchObservedRunningTime="2026-01-22 06:49:54.154994803 +0000 UTC m=+886.296901518" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.242746 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2zgl\" (UniqueName: \"kubernetes.io/projected/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-kube-api-access-s2zgl\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.243286 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-webhook-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.243557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-apiservice-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.250791 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-webhook-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.254722 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-apiservice-cert\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: \"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.265553 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2zgl\" (UniqueName: \"kubernetes.io/projected/0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c-kube-api-access-s2zgl\") pod \"metallb-operator-controller-manager-7449444d4b-xh4ps\" (UID: 
\"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c\") " pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.321950 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.412370 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst"] Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.413534 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.417763 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.418207 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst"] Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.418384 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.419731 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-c59cj" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.446531 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpw2s\" (UniqueName: \"kubernetes.io/projected/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-kube-api-access-tpw2s\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.446620 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-webhook-cert\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.446671 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-apiservice-cert\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.548136 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-apiservice-cert\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.548271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpw2s\" (UniqueName: \"kubernetes.io/projected/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-kube-api-access-tpw2s\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.548360 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-webhook-cert\") pod 
\"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.552733 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-apiservice-cert\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.553583 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-webhook-cert\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.568704 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpw2s\" (UniqueName: \"kubernetes.io/projected/48a13b3e-ee8e-4ba2-ad41-c83176d673a5-kube-api-access-tpw2s\") pod \"metallb-operator-webhook-server-fc49cf759-5hjst\" (UID: \"48a13b3e-ee8e-4ba2-ad41-c83176d673a5\") " pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.736974 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:49:54 crc kubenswrapper[4720]: I0122 06:49:54.825607 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps"] Jan 22 06:49:54 crc kubenswrapper[4720]: W0122 06:49:54.841866 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4de6cb_3e0d_46e0_a286_ab0ac437bb3c.slice/crio-3a1b5dc1b9fa7a0d31bd4b96632bd509a56df6e82f1e9c62fcdcfb774b564ddd WatchSource:0}: Error finding container 3a1b5dc1b9fa7a0d31bd4b96632bd509a56df6e82f1e9c62fcdcfb774b564ddd: Status 404 returned error can't find the container with id 3a1b5dc1b9fa7a0d31bd4b96632bd509a56df6e82f1e9c62fcdcfb774b564ddd Jan 22 06:49:55 crc kubenswrapper[4720]: I0122 06:49:55.046823 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" event={"ID":"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c","Type":"ContainerStarted","Data":"3a1b5dc1b9fa7a0d31bd4b96632bd509a56df6e82f1e9c62fcdcfb774b564ddd"} Jan 22 06:49:55 crc kubenswrapper[4720]: I0122 06:49:55.080443 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:55 crc kubenswrapper[4720]: I0122 06:49:55.080535 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:49:55 crc kubenswrapper[4720]: W0122 06:49:55.232654 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a13b3e_ee8e_4ba2_ad41_c83176d673a5.slice/crio-2a80a553a419bfbfadfb21adb93f0103f0b7089759938501e5a4cefddd7a0bab WatchSource:0}: Error finding container 2a80a553a419bfbfadfb21adb93f0103f0b7089759938501e5a4cefddd7a0bab: Status 404 returned error can't 
find the container with id 2a80a553a419bfbfadfb21adb93f0103f0b7089759938501e5a4cefddd7a0bab Jan 22 06:49:55 crc kubenswrapper[4720]: I0122 06:49:55.232504 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst"] Jan 22 06:49:56 crc kubenswrapper[4720]: I0122 06:49:56.056085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" event={"ID":"48a13b3e-ee8e-4ba2-ad41-c83176d673a5","Type":"ContainerStarted","Data":"2a80a553a419bfbfadfb21adb93f0103f0b7089759938501e5a4cefddd7a0bab"} Jan 22 06:49:56 crc kubenswrapper[4720]: I0122 06:49:56.158008 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-8tfbw" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="registry-server" probeResult="failure" output=< Jan 22 06:49:56 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Jan 22 06:49:56 crc kubenswrapper[4720]: > Jan 22 06:49:56 crc kubenswrapper[4720]: I0122 06:49:56.692067 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:56 crc kubenswrapper[4720]: I0122 06:49:56.692154 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:56 crc kubenswrapper[4720]: I0122 06:49:56.789436 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:58 crc kubenswrapper[4720]: I0122 06:49:58.145101 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 06:49:59 crc kubenswrapper[4720]: I0122 06:49:59.827733 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 06:49:59 crc 
kubenswrapper[4720]: I0122 06:49:59.945982 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:49:59 crc kubenswrapper[4720]: I0122 06:49:59.946341 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-trf28" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="registry-server" containerID="cri-o://7d5ffa46107ed89646b6094ba3272783810517ad027d6af7f6148a265433dd9d" gracePeriod=2 Jan 22 06:50:00 crc kubenswrapper[4720]: I0122 06:50:00.135703 4720 generic.go:334] "Generic (PLEG): container finished" podID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerID="7d5ffa46107ed89646b6094ba3272783810517ad027d6af7f6148a265433dd9d" exitCode=0 Jan 22 06:50:00 crc kubenswrapper[4720]: I0122 06:50:00.136294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerDied","Data":"7d5ffa46107ed89646b6094ba3272783810517ad027d6af7f6148a265433dd9d"} Jan 22 06:50:05 crc kubenswrapper[4720]: I0122 06:50:05.128010 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:50:05 crc kubenswrapper[4720]: I0122 06:50:05.178467 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:50:05 crc kubenswrapper[4720]: I0122 06:50:05.371879 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:50:06 crc kubenswrapper[4720]: I0122 06:50:06.181100 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8tfbw" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="registry-server" 
containerID="cri-o://043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0" gracePeriod=2 Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.430413 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trf28" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.561936 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content\") pod \"a64a2970-44b4-4c97-98d8-7d7de717e554\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.562006 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mqp\" (UniqueName: \"kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp\") pod \"a64a2970-44b4-4c97-98d8-7d7de717e554\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.562112 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities\") pod \"a64a2970-44b4-4c97-98d8-7d7de717e554\" (UID: \"a64a2970-44b4-4c97-98d8-7d7de717e554\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.563044 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities" (OuterVolumeSpecName: "utilities") pod "a64a2970-44b4-4c97-98d8-7d7de717e554" (UID: "a64a2970-44b4-4c97-98d8-7d7de717e554"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.570119 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp" (OuterVolumeSpecName: "kube-api-access-d9mqp") pod "a64a2970-44b4-4c97-98d8-7d7de717e554" (UID: "a64a2970-44b4-4c97-98d8-7d7de717e554"). InnerVolumeSpecName "kube-api-access-d9mqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.571948 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.650264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a64a2970-44b4-4c97-98d8-7d7de717e554" (UID: "a64a2970-44b4-4c97-98d8-7d7de717e554"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663052 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qggd9\" (UniqueName: \"kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9\") pod \"20c3d28d-88e7-43da-81fa-57df712470e9\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663143 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities\") pod \"20c3d28d-88e7-43da-81fa-57df712470e9\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663332 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content\") pod \"20c3d28d-88e7-43da-81fa-57df712470e9\" (UID: \"20c3d28d-88e7-43da-81fa-57df712470e9\") " Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663583 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663604 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mqp\" (UniqueName: \"kubernetes.io/projected/a64a2970-44b4-4c97-98d8-7d7de717e554-kube-api-access-d9mqp\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.663616 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a64a2970-44b4-4c97-98d8-7d7de717e554-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.664319 
4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities" (OuterVolumeSpecName: "utilities") pod "20c3d28d-88e7-43da-81fa-57df712470e9" (UID: "20c3d28d-88e7-43da-81fa-57df712470e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.666997 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9" (OuterVolumeSpecName: "kube-api-access-qggd9") pod "20c3d28d-88e7-43da-81fa-57df712470e9" (UID: "20c3d28d-88e7-43da-81fa-57df712470e9"). InnerVolumeSpecName "kube-api-access-qggd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.688960 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20c3d28d-88e7-43da-81fa-57df712470e9" (UID: "20c3d28d-88e7-43da-81fa-57df712470e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.765292 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qggd9\" (UniqueName: \"kubernetes.io/projected/20c3d28d-88e7-43da-81fa-57df712470e9-kube-api-access-qggd9\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.765348 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:07 crc kubenswrapper[4720]: I0122 06:50:07.765365 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20c3d28d-88e7-43da-81fa-57df712470e9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.198956 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-trf28" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.198833 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-trf28" event={"ID":"a64a2970-44b4-4c97-98d8-7d7de717e554","Type":"ContainerDied","Data":"efbae97e8be5ce4a1c8ac6e5ffe17a3a3f59408125616f360bdd48394a84f4e2"} Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.199194 4720 scope.go:117] "RemoveContainer" containerID="7d5ffa46107ed89646b6094ba3272783810517ad027d6af7f6148a265433dd9d" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.200517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" event={"ID":"48a13b3e-ee8e-4ba2-ad41-c83176d673a5","Type":"ContainerStarted","Data":"695a2332e60aafe6b8ce739dcc980480ddf4b77514216af45dad21ad4562936a"} Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.200642 4720 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.203603 4720 generic.go:334] "Generic (PLEG): container finished" podID="20c3d28d-88e7-43da-81fa-57df712470e9" containerID="043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0" exitCode=0 Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.203669 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerDied","Data":"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0"} Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.203694 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8tfbw" event={"ID":"20c3d28d-88e7-43da-81fa-57df712470e9","Type":"ContainerDied","Data":"b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f"} Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.203784 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8tfbw" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.209367 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" event={"ID":"0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c","Type":"ContainerStarted","Data":"753456c26de752b5a00d24d2f3f1b5db0676a10545168229d13d7a1ec3cb42ba"} Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.220589 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.235237 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" podStartSLOduration=2.077370096 podStartE2EDuration="14.235219199s" podCreationTimestamp="2026-01-22 06:49:54 +0000 UTC" firstStartedPulling="2026-01-22 06:49:55.236688807 +0000 UTC m=+887.378595512" lastFinishedPulling="2026-01-22 06:50:07.3945379 +0000 UTC m=+899.536444615" observedRunningTime="2026-01-22 06:50:08.233624933 +0000 UTC m=+900.375531648" watchObservedRunningTime="2026-01-22 06:50:08.235219199 +0000 UTC m=+900.377125904" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.238209 4720 scope.go:117] "RemoveContainer" containerID="8765e253c41e7369a7adf593d9763a8895da11cf4ed1507e00c51a562b83139b" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.261367 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.268485 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-trf28"] Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.287162 4720 scope.go:117] "RemoveContainer" containerID="cbf5a0dc51f7b9a0f7f2f7078ea5ab1438c70e2d178ec76e5747e51b968d969d" Jan 22 06:50:08 crc 
kubenswrapper[4720]: I0122 06:50:08.289989 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" podStartSLOduration=2.752975085 podStartE2EDuration="15.289955778s" podCreationTimestamp="2026-01-22 06:49:53 +0000 UTC" firstStartedPulling="2026-01-22 06:49:54.84466004 +0000 UTC m=+886.986566745" lastFinishedPulling="2026-01-22 06:50:07.381640733 +0000 UTC m=+899.523547438" observedRunningTime="2026-01-22 06:50:08.279581422 +0000 UTC m=+900.421488137" watchObservedRunningTime="2026-01-22 06:50:08.289955778 +0000 UTC m=+900.431862503" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.303199 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.307498 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8tfbw"] Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.318416 4720 scope.go:117] "RemoveContainer" containerID="043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.338815 4720 scope.go:117] "RemoveContainer" containerID="d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.360179 4720 scope.go:117] "RemoveContainer" containerID="1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.376067 4720 scope.go:117] "RemoveContainer" containerID="043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0" Jan 22 06:50:08 crc kubenswrapper[4720]: E0122 06:50:08.376634 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0\": container with ID starting with 
043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0 not found: ID does not exist" containerID="043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.376697 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0"} err="failed to get container status \"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0\": rpc error: code = NotFound desc = could not find container \"043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0\": container with ID starting with 043bdd3bdce30d2d05f3eafd4f167c9beb2e4abb1c5b6b97f6e0525b214c8cd0 not found: ID does not exist" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.376740 4720 scope.go:117] "RemoveContainer" containerID="d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51" Jan 22 06:50:08 crc kubenswrapper[4720]: E0122 06:50:08.377412 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51\": container with ID starting with d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51 not found: ID does not exist" containerID="d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.377466 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51"} err="failed to get container status \"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51\": rpc error: code = NotFound desc = could not find container \"d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51\": container with ID starting with d30cb007b2be8a4daebf21b423dea691f247e3cefdff7dd8e494df20f6b3ed51 not found: ID does not 
exist" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.377504 4720 scope.go:117] "RemoveContainer" containerID="1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919" Jan 22 06:50:08 crc kubenswrapper[4720]: E0122 06:50:08.377963 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919\": container with ID starting with 1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919 not found: ID does not exist" containerID="1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919" Jan 22 06:50:08 crc kubenswrapper[4720]: I0122 06:50:08.378003 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919"} err="failed to get container status \"1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919\": rpc error: code = NotFound desc = could not find container \"1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919\": container with ID starting with 1b8ff919a2cb2f5093a2b33c4786bb5349a59701f251e1412fbb34e8417c3919 not found: ID does not exist" Jan 22 06:50:09 crc kubenswrapper[4720]: E0122 06:50:09.613440 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:50:10 crc 
kubenswrapper[4720]: I0122 06:50:10.225970 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" path="/var/lib/kubelet/pods/20c3d28d-88e7-43da-81fa-57df712470e9/volumes" Jan 22 06:50:10 crc kubenswrapper[4720]: I0122 06:50:10.226755 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" path="/var/lib/kubelet/pods/a64a2970-44b4-4c97-98d8-7d7de717e554/volumes" Jan 22 06:50:19 crc kubenswrapper[4720]: E0122 06:50:19.786693 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:50:24 crc kubenswrapper[4720]: I0122 06:50:24.747235 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fc49cf759-5hjst" Jan 22 06:50:30 crc kubenswrapper[4720]: E0122 06:50:30.030871 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:50:40 crc kubenswrapper[4720]: E0122 06:50:40.196891 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:50:44 crc kubenswrapper[4720]: I0122 06:50:44.326452 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7449444d4b-xh4ps" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.132638 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kdlvf"] Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.132990 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133009 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.133027 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="extract-utilities" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133035 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="extract-utilities" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.133045 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="extract-content" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133052 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="extract-content" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.133061 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133070 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.133077 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="extract-content" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133083 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="extract-content" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.133094 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="extract-utilities" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133100 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="extract-utilities" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133222 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a64a2970-44b4-4c97-98d8-7d7de717e554" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.133233 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="20c3d28d-88e7-43da-81fa-57df712470e9" containerName="registry-server" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.143611 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.148584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-t8kxg" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.148630 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.160117 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.163668 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-startup\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.163749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-conf\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.163815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.163891 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.163984 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-sockets\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.164025 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-reloader\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.164249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sz8\" (UniqueName: \"kubernetes.io/projected/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-kube-api-access-v4sz8\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.175210 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl"] Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.178835 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.185299 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.196021 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl"] Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.266838 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-startup\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.266920 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-conf\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.266959 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.267348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.267394 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-sockets\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.267421 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-reloader\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.267460 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4sz8\" (UniqueName: \"kubernetes.io/projected/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-kube-api-access-v4sz8\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.267792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-conf\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.268146 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-67m5k"] Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.268373 4720 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.268468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-startup\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc 
kubenswrapper[4720]: I0122 06:50:45.268392 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-frr-sockets\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.268520 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs podName:e41ff3f3-3360-4fd3-99ed-448ca648f3b6 nodeName:}" failed. No retries permitted until 2026-01-22 06:50:45.768483699 +0000 UTC m=+937.910390574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs") pod "frr-k8s-kdlvf" (UID: "e41ff3f3-3360-4fd3-99ed-448ca648f3b6") : secret "frr-k8s-certs-secret" not found Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.269564 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.270425 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-reloader\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.272172 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.272609 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.272856 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.273928 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-k5dbn" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.274155 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.278629 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-49vhq"] Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.280142 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.284817 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.291993 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-49vhq"] Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.313634 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4sz8\" (UniqueName: \"kubernetes.io/projected/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-kube-api-access-v4sz8\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqkl8\" (UniqueName: \"kubernetes.io/projected/15c14672-daa2-408e-a693-6ac7bef81828-kube-api-access-kqkl8\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: \"15c14672-daa2-408e-a693-6ac7bef81828\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-cert\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c14672-daa2-408e-a693-6ac7bef81828-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: \"15c14672-daa2-408e-a693-6ac7bef81828\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369390 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369410 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrqln\" (UniqueName: \"kubernetes.io/projected/ce7509f5-f9e6-4130-b569-986bb9b61ffd-kube-api-access-mrqln\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369429 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metrics-certs\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369485 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metallb-excludel2\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369516 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-metrics-certs\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " 
pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.369530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8pf\" (UniqueName: \"kubernetes.io/projected/dfe2424d-a522-48e7-921c-ddce7a244b13-kube-api-access-xh8pf\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.470614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metallb-excludel2\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.470696 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-metrics-certs\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.470728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh8pf\" (UniqueName: \"kubernetes.io/projected/dfe2424d-a522-48e7-921c-ddce7a244b13-kube-api-access-xh8pf\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.470776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqkl8\" (UniqueName: \"kubernetes.io/projected/15c14672-daa2-408e-a693-6ac7bef81828-kube-api-access-kqkl8\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: 
\"15c14672-daa2-408e-a693-6ac7bef81828\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.470851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-cert\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.471273 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c14672-daa2-408e-a693-6ac7bef81828-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: \"15c14672-daa2-408e-a693-6ac7bef81828\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.471355 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.471409 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrqln\" (UniqueName: \"kubernetes.io/projected/ce7509f5-f9e6-4130-b569-986bb9b61ffd-kube-api-access-mrqln\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.471464 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metrics-certs\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: 
E0122 06:50:45.471505 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.471583 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist podName:ce7509f5-f9e6-4130-b569-986bb9b61ffd nodeName:}" failed. No retries permitted until 2026-01-22 06:50:45.971558827 +0000 UTC m=+938.113465542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist") pod "speaker-67m5k" (UID: "ce7509f5-f9e6-4130-b569-986bb9b61ffd") : secret "metallb-memberlist" not found Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.471749 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metallb-excludel2\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.474542 4720 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.476646 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-metrics-certs\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.477359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/15c14672-daa2-408e-a693-6ac7bef81828-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: \"15c14672-daa2-408e-a693-6ac7bef81828\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.487047 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dfe2424d-a522-48e7-921c-ddce7a244b13-cert\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.490742 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-metrics-certs\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.491277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh8pf\" (UniqueName: \"kubernetes.io/projected/dfe2424d-a522-48e7-921c-ddce7a244b13-kube-api-access-xh8pf\") pod \"controller-6968d8fdc4-49vhq\" (UID: \"dfe2424d-a522-48e7-921c-ddce7a244b13\") " pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.491332 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqkl8\" (UniqueName: \"kubernetes.io/projected/15c14672-daa2-408e-a693-6ac7bef81828-kube-api-access-kqkl8\") pod \"frr-k8s-webhook-server-7df86c4f6c-bnntl\" (UID: \"15c14672-daa2-408e-a693-6ac7bef81828\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.491424 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrqln\" (UniqueName: \"kubernetes.io/projected/ce7509f5-f9e6-4130-b569-986bb9b61ffd-kube-api-access-mrqln\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k" Jan 22 06:50:45 crc 
kubenswrapper[4720]: I0122 06:50:45.512946 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.638540 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-49vhq" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.777024 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.782036 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e41ff3f3-3360-4fd3-99ed-448ca648f3b6-metrics-certs\") pod \"frr-k8s-kdlvf\" (UID: \"e41ff3f3-3360-4fd3-99ed-448ca648f3b6\") " pod="metallb-system/frr-k8s-kdlvf" Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.921072 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-49vhq"] Jan 22 06:50:45 crc kubenswrapper[4720]: W0122 06:50:45.925859 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe2424d_a522_48e7_921c_ddce7a244b13.slice/crio-a7932bd361fa05e2e733b03f8677b42303ae1dbcb5f0d37ce85411f8a31121e5 WatchSource:0}: Error finding container a7932bd361fa05e2e733b03f8677b42303ae1dbcb5f0d37ce85411f8a31121e5: Status 404 returned error can't find the container with id a7932bd361fa05e2e733b03f8677b42303ae1dbcb5f0d37ce85411f8a31121e5 Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.969356 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl"] Jan 22 06:50:45 crc 
kubenswrapper[4720]: W0122 06:50:45.972211 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c14672_daa2_408e_a693_6ac7bef81828.slice/crio-667f32bfc1db186b710992744f83b11a29510f61f1f9d88779e8fde477f1ad64 WatchSource:0}: Error finding container 667f32bfc1db186b710992744f83b11a29510f61f1f9d88779e8fde477f1ad64: Status 404 returned error can't find the container with id 667f32bfc1db186b710992744f83b11a29510f61f1f9d88779e8fde477f1ad64
Jan 22 06:50:45 crc kubenswrapper[4720]: I0122 06:50:45.980250 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k"
Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.980450 4720 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 22 06:50:45 crc kubenswrapper[4720]: E0122 06:50:45.980548 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist podName:ce7509f5-f9e6-4130-b569-986bb9b61ffd nodeName:}" failed. No retries permitted until 2026-01-22 06:50:46.980521733 +0000 UTC m=+939.122428438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist") pod "speaker-67m5k" (UID: "ce7509f5-f9e6-4130-b569-986bb9b61ffd") : secret "metallb-memberlist" not found
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.082074 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kdlvf"
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.512295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" event={"ID":"15c14672-daa2-408e-a693-6ac7bef81828","Type":"ContainerStarted","Data":"667f32bfc1db186b710992744f83b11a29510f61f1f9d88779e8fde477f1ad64"}
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.513822 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"0fa3647da7dce3a1be3ec27cd82230f45974e63034dc6d26280558d2fafde6c5"}
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.515963 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-49vhq" event={"ID":"dfe2424d-a522-48e7-921c-ddce7a244b13","Type":"ContainerStarted","Data":"cadb01a6dd3fae8b2ed0ec18cd083af8836051ec7a289d456d34a700f5a49c19"}
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.516047 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-49vhq" event={"ID":"dfe2424d-a522-48e7-921c-ddce7a244b13","Type":"ContainerStarted","Data":"4d6686aa98e49bb2df3d495b020620bd048ba493edb34f0a1724ec00debf0555"}
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.516068 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-49vhq" event={"ID":"dfe2424d-a522-48e7-921c-ddce7a244b13","Type":"ContainerStarted","Data":"a7932bd361fa05e2e733b03f8677b42303ae1dbcb5f0d37ce85411f8a31121e5"}
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.516124 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-49vhq"
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.538487 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-49vhq" podStartSLOduration=1.53846494 podStartE2EDuration="1.53846494s" podCreationTimestamp="2026-01-22 06:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:50:46.534093396 +0000 UTC m=+938.676000141" watchObservedRunningTime="2026-01-22 06:50:46.53846494 +0000 UTC m=+938.680371645"
Jan 22 06:50:46 crc kubenswrapper[4720]: I0122 06:50:46.995002 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k"
Jan 22 06:50:47 crc kubenswrapper[4720]: I0122 06:50:47.016656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ce7509f5-f9e6-4130-b569-986bb9b61ffd-memberlist\") pod \"speaker-67m5k\" (UID: \"ce7509f5-f9e6-4130-b569-986bb9b61ffd\") " pod="metallb-system/speaker-67m5k"
Jan 22 06:50:47 crc kubenswrapper[4720]: I0122 06:50:47.088828 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-67m5k"
Jan 22 06:50:47 crc kubenswrapper[4720]: W0122 06:50:47.138962 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce7509f5_f9e6_4130_b569_986bb9b61ffd.slice/crio-9af5a5946554a40b951067dd6a437a228e50fc7c7768d6dd5aad5148f5a16e82 WatchSource:0}: Error finding container 9af5a5946554a40b951067dd6a437a228e50fc7c7768d6dd5aad5148f5a16e82: Status 404 returned error can't find the container with id 9af5a5946554a40b951067dd6a437a228e50fc7c7768d6dd5aad5148f5a16e82
Jan 22 06:50:47 crc kubenswrapper[4720]: I0122 06:50:47.536558 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67m5k" event={"ID":"ce7509f5-f9e6-4130-b569-986bb9b61ffd","Type":"ContainerStarted","Data":"8fba745748a900d62cdb7f2d5cb55c633024f9e8a7ef5debb879bcfd13a8fdc8"}
Jan 22 06:50:47 crc kubenswrapper[4720]: I0122 06:50:47.536637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67m5k" event={"ID":"ce7509f5-f9e6-4130-b569-986bb9b61ffd","Type":"ContainerStarted","Data":"9af5a5946554a40b951067dd6a437a228e50fc7c7768d6dd5aad5148f5a16e82"}
Jan 22 06:50:48 crc kubenswrapper[4720]: I0122 06:50:48.567629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-67m5k" event={"ID":"ce7509f5-f9e6-4130-b569-986bb9b61ffd","Type":"ContainerStarted","Data":"b6ce81ed320c2e653ec3640536b4b496c420cf2d1624b2e1db38d05a232d632e"}
Jan 22 06:50:48 crc kubenswrapper[4720]: I0122 06:50:48.569608 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-67m5k"
Jan 22 06:50:48 crc kubenswrapper[4720]: I0122 06:50:48.640177 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-67m5k" podStartSLOduration=3.640160433 podStartE2EDuration="3.640160433s" podCreationTimestamp="2026-01-22 06:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:50:48.638138416 +0000 UTC m=+940.780045121" watchObservedRunningTime="2026-01-22 06:50:48.640160433 +0000 UTC m=+940.782067138"
Jan 22 06:50:50 crc kubenswrapper[4720]: E0122 06:50:50.346609 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache]"
Jan 22 06:50:55 crc kubenswrapper[4720]: I0122 06:50:55.682059 4720 generic.go:334] "Generic (PLEG): container finished" podID="e41ff3f3-3360-4fd3-99ed-448ca648f3b6" containerID="50c6e6f820a8e9d331f0fee40ac8ae56a27a9079f6b7c74723d4a2a89a351856" exitCode=0
Jan 22 06:50:55 crc kubenswrapper[4720]: I0122 06:50:55.682180 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerDied","Data":"50c6e6f820a8e9d331f0fee40ac8ae56a27a9079f6b7c74723d4a2a89a351856"}
Jan 22 06:50:55 crc kubenswrapper[4720]: I0122 06:50:55.684613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" event={"ID":"15c14672-daa2-408e-a693-6ac7bef81828","Type":"ContainerStarted","Data":"7998296dacf5fb28e4ed1641a9a4779ed43134847a689a772de0855186fd7008"}
Jan 22 06:50:55 crc kubenswrapper[4720]: I0122 06:50:55.684746 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl"
Jan 22 06:50:55 crc kubenswrapper[4720]: I0122 06:50:55.735090 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl" podStartSLOduration=1.425962865 podStartE2EDuration="10.735066996s" podCreationTimestamp="2026-01-22 06:50:45 +0000 UTC" firstStartedPulling="2026-01-22 06:50:45.975344696 +0000 UTC m=+938.117251401" lastFinishedPulling="2026-01-22 06:50:55.284448827 +0000 UTC m=+947.426355532" observedRunningTime="2026-01-22 06:50:55.730418764 +0000 UTC m=+947.872325469" watchObservedRunningTime="2026-01-22 06:50:55.735066996 +0000 UTC m=+947.876973701"
Jan 22 06:50:56 crc kubenswrapper[4720]: I0122 06:50:56.695682 4720 generic.go:334] "Generic (PLEG): container finished" podID="e41ff3f3-3360-4fd3-99ed-448ca648f3b6" containerID="22d517d8c51aa723ab1b8d0df82d546d7c7a42fcfd6cdaf558f590ac911f168c" exitCode=0
Jan 22 06:50:56 crc kubenswrapper[4720]: I0122 06:50:56.695781 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerDied","Data":"22d517d8c51aa723ab1b8d0df82d546d7c7a42fcfd6cdaf558f590ac911f168c"}
Jan 22 06:50:57 crc kubenswrapper[4720]: I0122 06:50:57.094794 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-67m5k"
Jan 22 06:50:57 crc kubenswrapper[4720]: I0122 06:50:57.707227 4720 generic.go:334] "Generic (PLEG): container finished" podID="e41ff3f3-3360-4fd3-99ed-448ca648f3b6" containerID="51381071910b1d8ff268d008412d73d1dd6c3e09e36f654be13d73c5af2db929" exitCode=0
Jan 22 06:50:57 crc kubenswrapper[4720]: I0122 06:50:57.707295 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerDied","Data":"51381071910b1d8ff268d008412d73d1dd6c3e09e36f654be13d73c5af2db929"}
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.720107 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"d4a846326e605ad45604bd4e8e18d6b3eb43f9be66a82c18b69e9307a421f43f"}
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.720594 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"936cb5698293426c52b3ba89999450e616acfd7295685abbc3343b078cdada3a"}
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.720615 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"67e7f9ea4f25bc1f2ac8ab8f246057c13fb303139bb1723f1a1611c7b5124d90"}
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.720629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"617870146d27ecc55807a7a5b971ec77d861c7c1d3ad0c7c29e55043cea0b901"}
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.877182 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"]
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.878655 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.881294 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.888617 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"]
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.955332 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.955400 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:58 crc kubenswrapper[4720]: I0122 06:50:58.955486 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fh57\" (UniqueName: \"kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.056648 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fh57\" (UniqueName: \"kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.056742 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.056804 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.057469 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.057562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.088805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fh57\" (UniqueName: \"kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.205562 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.677163 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"]
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.735375 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"18a52eda497c5bcbcf4cddc0c1c88f71f8684ef76d60a05f275705df682dd6e5"}
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.735928 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kdlvf" event={"ID":"e41ff3f3-3360-4fd3-99ed-448ca648f3b6","Type":"ContainerStarted","Data":"553f0076b6c5952f66d437f49edecddee8b7492e756f332265110d7e3dcbb2fa"}
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.735957 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kdlvf"
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.737745 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerStarted","Data":"a9c34fc7a099063fcde5da73e5b1d3052a9b13cacb9545ff7feac1745b990eb5"}
Jan 22 06:50:59 crc kubenswrapper[4720]: I0122 06:50:59.761163 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kdlvf" podStartSLOduration=5.644002507 podStartE2EDuration="14.761129076s" podCreationTimestamp="2026-01-22 06:50:45 +0000 UTC" firstStartedPulling="2026-01-22 06:50:46.19306923 +0000 UTC m=+938.334975925" lastFinishedPulling="2026-01-22 06:50:55.310195789 +0000 UTC m=+947.452102494" observedRunningTime="2026-01-22 06:50:59.760805877 +0000 UTC m=+951.902712592" watchObservedRunningTime="2026-01-22 06:50:59.761129076 +0000 UTC m=+951.903035801"
Jan 22 06:51:00 crc kubenswrapper[4720]: E0122 06:51:00.494632 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice/crio-b269d8df496e126948efd7ae1815b1e8fcf16fb4042e21f6ff1ce7157c38bc7f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64a2970_44b4_4c97_98d8_7d7de717e554.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c3d28d_88e7_43da_81fa_57df712470e9.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 06:51:00 crc kubenswrapper[4720]: I0122 06:51:00.750042 4720 generic.go:334] "Generic (PLEG): container finished" podID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerID="7205f081ed4b081013886261b49119eff10d0bf28586ef49f9e2b67155dc50cf" exitCode=0
Jan 22 06:51:00 crc kubenswrapper[4720]: I0122 06:51:00.750146 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerDied","Data":"7205f081ed4b081013886261b49119eff10d0bf28586ef49f9e2b67155dc50cf"}
Jan 22 06:51:01 crc kubenswrapper[4720]: I0122 06:51:01.082613 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kdlvf"
Jan 22 06:51:01 crc kubenswrapper[4720]: I0122 06:51:01.152458 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kdlvf"
Jan 22 06:51:04 crc kubenswrapper[4720]: I0122 06:51:04.784756 4720 generic.go:334] "Generic (PLEG): container finished" podID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerID="a3e6c7ae46ce844171d2965a4b9bb5c69db26b95d5ef1e4d83ee3a46c226322b" exitCode=0
Jan 22 06:51:04 crc kubenswrapper[4720]: I0122 06:51:04.784879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerDied","Data":"a3e6c7ae46ce844171d2965a4b9bb5c69db26b95d5ef1e4d83ee3a46c226322b"}
Jan 22 06:51:05 crc kubenswrapper[4720]: I0122 06:51:05.524634 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bnntl"
Jan 22 06:51:05 crc kubenswrapper[4720]: I0122 06:51:05.645635 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-49vhq"
Jan 22 06:51:05 crc kubenswrapper[4720]: I0122 06:51:05.796274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerStarted","Data":"20339909c4d914f8c0a9c428f248da6d0fb4d269563404d290d3ab47fbe478ef"}
Jan 22 06:51:05 crc kubenswrapper[4720]: I0122 06:51:05.820590 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" podStartSLOduration=4.281427128 podStartE2EDuration="7.820562039s" podCreationTimestamp="2026-01-22 06:50:58 +0000 UTC" firstStartedPulling="2026-01-22 06:51:00.753031879 +0000 UTC m=+952.894938584" lastFinishedPulling="2026-01-22 06:51:04.29216679 +0000 UTC m=+956.434073495" observedRunningTime="2026-01-22 06:51:05.814481987 +0000 UTC m=+957.956388712" watchObservedRunningTime="2026-01-22 06:51:05.820562039 +0000 UTC m=+957.962468744"
Jan 22 06:51:06 crc kubenswrapper[4720]: I0122 06:51:06.809293 4720 generic.go:334] "Generic (PLEG): container finished" podID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerID="20339909c4d914f8c0a9c428f248da6d0fb4d269563404d290d3ab47fbe478ef" exitCode=0
Jan 22 06:51:06 crc kubenswrapper[4720]: I0122 06:51:06.809362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerDied","Data":"20339909c4d914f8c0a9c428f248da6d0fb4d269563404d290d3ab47fbe478ef"}
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.142329 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.239793 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fh57\" (UniqueName: \"kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57\") pod \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") "
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.239852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util\") pod \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") "
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.240015 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle\") pod \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\" (UID: \"fc1373bb-3c54-4e19-9129-6d8b288bdc1a\") "
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.241831 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle" (OuterVolumeSpecName: "bundle") pod "fc1373bb-3c54-4e19-9129-6d8b288bdc1a" (UID: "fc1373bb-3c54-4e19-9129-6d8b288bdc1a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:51:08 crc kubenswrapper[4720]: E0122 06:51:08.250317 4720 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1138f4827bffc71f1b7d89b2202a41fd0dbc2d936677512d87f13eb4356f4d0b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1138f4827bffc71f1b7d89b2202a41fd0dbc2d936677512d87f13eb4356f4d0b/diff: no such file or directory, extraDiskErr:
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.251213 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util" (OuterVolumeSpecName: "util") pod "fc1373bb-3c54-4e19-9129-6d8b288bdc1a" (UID: "fc1373bb-3c54-4e19-9129-6d8b288bdc1a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.251407 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57" (OuterVolumeSpecName: "kube-api-access-6fh57") pod "fc1373bb-3c54-4e19-9129-6d8b288bdc1a" (UID: "fc1373bb-3c54-4e19-9129-6d8b288bdc1a"). InnerVolumeSpecName "kube-api-access-6fh57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.342298 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.342707 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fh57\" (UniqueName: \"kubernetes.io/projected/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-kube-api-access-6fh57\") on node \"crc\" DevicePath \"\""
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.342793 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc1373bb-3c54-4e19-9129-6d8b288bdc1a-util\") on node \"crc\" DevicePath \"\""
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.830679 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw" event={"ID":"fc1373bb-3c54-4e19-9129-6d8b288bdc1a","Type":"ContainerDied","Data":"a9c34fc7a099063fcde5da73e5b1d3052a9b13cacb9545ff7feac1745b990eb5"}
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.830736 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9c34fc7a099063fcde5da73e5b1d3052a9b13cacb9545ff7feac1745b990eb5"
Jan 22 06:51:08 crc kubenswrapper[4720]: I0122 06:51:08.830776 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw"
Jan 22 06:51:10 crc kubenswrapper[4720]: E0122 06:51:10.672044 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.256793 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"]
Jan 22 06:51:12 crc kubenswrapper[4720]: E0122 06:51:12.257716 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="util"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.257738 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="util"
Jan 22 06:51:12 crc kubenswrapper[4720]: E0122 06:51:12.257757 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="extract"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.257765 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="extract"
Jan 22 06:51:12 crc kubenswrapper[4720]: E0122 06:51:12.257781 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="pull"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.257790 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="pull"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.257962 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1373bb-3c54-4e19-9129-6d8b288bdc1a" containerName="extract"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.258661 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.260774 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-kdvzw"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.261001 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.262758 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.270826 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"]
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.307563 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6a1d69b-4319-4fd9-8881-ded047df5b70-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.307641 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgx7\" (UniqueName: \"kubernetes.io/projected/c6a1d69b-4319-4fd9-8881-ded047df5b70-kube-api-access-ptgx7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.408825 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6a1d69b-4319-4fd9-8881-ded047df5b70-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.408901 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgx7\" (UniqueName: \"kubernetes.io/projected/c6a1d69b-4319-4fd9-8881-ded047df5b70-kube-api-access-ptgx7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.409455 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c6a1d69b-4319-4fd9-8881-ded047df5b70-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.445851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgx7\" (UniqueName: \"kubernetes.io/projected/c6a1d69b-4319-4fd9-8881-ded047df5b70-kube-api-access-ptgx7\") pod \"cert-manager-operator-controller-manager-64cf6dff88-ndkgg\" (UID: \"c6a1d69b-4319-4fd9-8881-ded047df5b70\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:12 crc kubenswrapper[4720]: I0122 06:51:12.585283 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"
Jan 22 06:51:13 crc kubenswrapper[4720]: I0122 06:51:13.081781 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg"]
Jan 22 06:51:13 crc kubenswrapper[4720]: W0122 06:51:13.086142 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6a1d69b_4319_4fd9_8881_ded047df5b70.slice/crio-826f41d67647da59a8f5f2551a1a8a005906f5e0053fca6cd071fb6ea85487e0 WatchSource:0}: Error finding container 826f41d67647da59a8f5f2551a1a8a005906f5e0053fca6cd071fb6ea85487e0: Status 404 returned error can't find the container with id 826f41d67647da59a8f5f2551a1a8a005906f5e0053fca6cd071fb6ea85487e0
Jan 22 06:51:13 crc kubenswrapper[4720]: I0122 06:51:13.870580 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg" event={"ID":"c6a1d69b-4319-4fd9-8881-ded047df5b70","Type":"ContainerStarted","Data":"826f41d67647da59a8f5f2551a1a8a005906f5e0053fca6cd071fb6ea85487e0"}
Jan 22 06:51:16 crc kubenswrapper[4720]: I0122 06:51:16.085074 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kdlvf"
Jan 22 06:51:20 crc kubenswrapper[4720]: E0122 06:51:20.878654 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 06:51:20 crc kubenswrapper[4720]: I0122 06:51:20.927665 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg" event={"ID":"c6a1d69b-4319-4fd9-8881-ded047df5b70","Type":"ContainerStarted","Data":"19a5012185c0de6f79f0b8ece761e93ba20b00cfd52a143d54d568eed86ce941"}
Jan 22 06:51:20 crc kubenswrapper[4720]: I0122 06:51:20.954400 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-ndkgg" podStartSLOduration=1.382547201 podStartE2EDuration="8.95436936s" podCreationTimestamp="2026-01-22 06:51:12 +0000 UTC" firstStartedPulling="2026-01-22 06:51:13.094384856 +0000 UTC m=+965.236291561" lastFinishedPulling="2026-01-22 06:51:20.666207015 +0000 UTC m=+972.808113720" observedRunningTime="2026-01-22 06:51:20.947539086 +0000 UTC m=+973.089445791" watchObservedRunningTime="2026-01-22 06:51:20.95436936 +0000 UTC m=+973.096276065"
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.820334 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-5klsr"]
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.822535 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr"
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.825351 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-d6cth"
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.825588 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.827369 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 22 06:51:23 crc kubenswrapper[4720]: I0122 06:51:23.843312 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-5klsr"]
Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.003184 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID: \"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr"
Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.003375 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5v7\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-kube-api-access-kc5v7\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID: \"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr"
Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.104928 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5v7\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-kube-api-access-kc5v7\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID:
\"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.105046 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID: \"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.132976 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5v7\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-kube-api-access-kc5v7\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID: \"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.149806 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34799a28-6c13-4288-946f-bc4d9e57b756-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-5klsr\" (UID: \"34799a28-6c13-4288-946f-bc4d9e57b756\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.448253 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.790722 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-5klsr"] Jan 22 06:51:24 crc kubenswrapper[4720]: I0122 06:51:24.954101 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" event={"ID":"34799a28-6c13-4288-946f-bc4d9e57b756","Type":"ContainerStarted","Data":"d955e651600fa984155600eec01c70ca8713689184e8fdaf53d3aecec1f9b98b"} Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.447412 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7"] Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.450562 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.453584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wjq4t" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.485501 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7"] Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.566073 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.566143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp4xd\" (UniqueName: 
\"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-kube-api-access-fp4xd\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.668154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.668231 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp4xd\" (UniqueName: \"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-kube-api-access-fp4xd\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.687800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.688546 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp4xd\" (UniqueName: \"kubernetes.io/projected/adf9f211-0196-4391-ae7a-c98e6e20147e-kube-api-access-fp4xd\") pod \"cert-manager-cainjector-855d9ccff4-fkxm7\" (UID: \"adf9f211-0196-4391-ae7a-c98e6e20147e\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:27 crc kubenswrapper[4720]: I0122 06:51:27.787888 4720 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" Jan 22 06:51:28 crc kubenswrapper[4720]: I0122 06:51:28.398848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7"] Jan 22 06:51:31 crc kubenswrapper[4720]: E0122 06:51:31.085649 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.438318 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-b9fc8"] Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.440125 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.442343 4720 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-qrqnk" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.448578 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-b9fc8"] Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.599474 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb99c\" (UniqueName: \"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-kube-api-access-rb99c\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.599589 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-bound-sa-token\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.744936 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb99c\" (UniqueName: \"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-kube-api-access-rb99c\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.745009 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-bound-sa-token\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.774652 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb99c\" (UniqueName: \"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-kube-api-access-rb99c\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:35 crc kubenswrapper[4720]: I0122 06:51:35.790030 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/34089ae4-0f59-4909-96f9-b64ebe3e1a29-bound-sa-token\") pod \"cert-manager-86cb77c54b-b9fc8\" (UID: \"34089ae4-0f59-4909-96f9-b64ebe3e1a29\") " pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:36 crc kubenswrapper[4720]: I0122 06:51:36.064723 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-b9fc8" Jan 22 06:51:36 crc kubenswrapper[4720]: I0122 06:51:36.878521 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-b9fc8"] Jan 22 06:51:36 crc kubenswrapper[4720]: W0122 06:51:36.891470 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34089ae4_0f59_4909_96f9_b64ebe3e1a29.slice/crio-7501b5cc55679407e09cd4670f092d43317c31de2779ded119ac41df7ce6ed17 WatchSource:0}: Error finding container 7501b5cc55679407e09cd4670f092d43317c31de2779ded119ac41df7ce6ed17: Status 404 returned error can't find the container with id 7501b5cc55679407e09cd4670f092d43317c31de2779ded119ac41df7ce6ed17 Jan 22 06:51:37 crc kubenswrapper[4720]: I0122 06:51:37.074109 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" event={"ID":"adf9f211-0196-4391-ae7a-c98e6e20147e","Type":"ContainerStarted","Data":"569bc98596e7533bae0d2f724f16f1ee01febc2b041fec743a7c454638d06c92"} Jan 22 06:51:37 crc kubenswrapper[4720]: I0122 06:51:37.075332 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-b9fc8" event={"ID":"34089ae4-0f59-4909-96f9-b64ebe3e1a29","Type":"ContainerStarted","Data":"7501b5cc55679407e09cd4670f092d43317c31de2779ded119ac41df7ce6ed17"} Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 06:51:38.084642 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" event={"ID":"34799a28-6c13-4288-946f-bc4d9e57b756","Type":"ContainerStarted","Data":"7c8b3e8afe44a757a0a74b259847abfe4749e1d774bda15554720e5af8e003bc"} Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 06:51:38.085743 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 
06:51:38.087131 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" event={"ID":"adf9f211-0196-4391-ae7a-c98e6e20147e","Type":"ContainerStarted","Data":"5bdfb84e1a7e68f6832f2e833da49ca278102de69710f0460dcbddb056ef2107"} Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 06:51:38.089922 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-b9fc8" event={"ID":"34089ae4-0f59-4909-96f9-b64ebe3e1a29","Type":"ContainerStarted","Data":"afa77724bfb6f09c9225ad6c03fbeb033f73bfee1ff13bbb123d6c15205bef94"} Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 06:51:38.110192 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" podStartSLOduration=2.583626232 podStartE2EDuration="15.110168457s" podCreationTimestamp="2026-01-22 06:51:23 +0000 UTC" firstStartedPulling="2026-01-22 06:51:24.807056136 +0000 UTC m=+976.948962851" lastFinishedPulling="2026-01-22 06:51:37.333598371 +0000 UTC m=+989.475505076" observedRunningTime="2026-01-22 06:51:38.10532084 +0000 UTC m=+990.247227555" watchObservedRunningTime="2026-01-22 06:51:38.110168457 +0000 UTC m=+990.252075172" Jan 22 06:51:38 crc kubenswrapper[4720]: I0122 06:51:38.136116 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-fkxm7" podStartSLOduration=10.239125318 podStartE2EDuration="11.136078293s" podCreationTimestamp="2026-01-22 06:51:27 +0000 UTC" firstStartedPulling="2026-01-22 06:51:36.414524417 +0000 UTC m=+988.556431142" lastFinishedPulling="2026-01-22 06:51:37.311477412 +0000 UTC m=+989.453384117" observedRunningTime="2026-01-22 06:51:38.127427958 +0000 UTC m=+990.269334683" watchObservedRunningTime="2026-01-22 06:51:38.136078293 +0000 UTC m=+990.277985058" Jan 22 06:51:41 crc kubenswrapper[4720]: E0122 06:51:41.264726 4720 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:51:44 crc kubenswrapper[4720]: I0122 06:51:44.453221 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-5klsr" Jan 22 06:51:44 crc kubenswrapper[4720]: I0122 06:51:44.477858 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-b9fc8" podStartSLOduration=8.800431265 podStartE2EDuration="9.477826655s" podCreationTimestamp="2026-01-22 06:51:35 +0000 UTC" firstStartedPulling="2026-01-22 06:51:36.895780415 +0000 UTC m=+989.037687130" lastFinishedPulling="2026-01-22 06:51:37.573175825 +0000 UTC m=+989.715082520" observedRunningTime="2026-01-22 06:51:38.15145638 +0000 UTC m=+990.293363105" watchObservedRunningTime="2026-01-22 06:51:44.477826655 +0000 UTC m=+996.619733360" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.176556 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"] Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.177663 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.206851 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-2mjlm" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.207120 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.207355 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.228572 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"] Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.353280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8fl\" (UniqueName: \"kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl\") pod \"openstack-operator-index-nnqrz\" (UID: \"39173aff-54de-48b2-aac0-4e515c596f7a\") " pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.455343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j8fl\" (UniqueName: \"kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl\") pod \"openstack-operator-index-nnqrz\" (UID: \"39173aff-54de-48b2-aac0-4e515c596f7a\") " pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.477680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j8fl\" (UniqueName: \"kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl\") pod \"openstack-operator-index-nnqrz\" (UID: 
\"39173aff-54de-48b2-aac0-4e515c596f7a\") " pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.545567 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:48 crc kubenswrapper[4720]: I0122 06:51:48.796553 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"] Jan 22 06:51:49 crc kubenswrapper[4720]: I0122 06:51:49.171357 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nnqrz" event={"ID":"39173aff-54de-48b2-aac0-4e515c596f7a","Type":"ContainerStarted","Data":"40ec59b6f237d97401736726c12a17838dc7e74d76225078c8101212238839bb"} Jan 22 06:51:51 crc kubenswrapper[4720]: E0122 06:51:51.440487 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:51:52 crc kubenswrapper[4720]: I0122 06:51:52.953806 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"] Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.361026 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2g762"] Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.362117 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2g762" Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.375437 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2g762"] Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.436409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrm7b\" (UniqueName: \"kubernetes.io/projected/80ba9f63-ae49-476c-9282-f9b32f804ab3-kube-api-access-zrm7b\") pod \"openstack-operator-index-2g762\" (UID: \"80ba9f63-ae49-476c-9282-f9b32f804ab3\") " pod="openstack-operators/openstack-operator-index-2g762" Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.539975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrm7b\" (UniqueName: \"kubernetes.io/projected/80ba9f63-ae49-476c-9282-f9b32f804ab3-kube-api-access-zrm7b\") pod \"openstack-operator-index-2g762\" (UID: \"80ba9f63-ae49-476c-9282-f9b32f804ab3\") " pod="openstack-operators/openstack-operator-index-2g762" Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.574189 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrm7b\" (UniqueName: \"kubernetes.io/projected/80ba9f63-ae49-476c-9282-f9b32f804ab3-kube-api-access-zrm7b\") pod \"openstack-operator-index-2g762\" (UID: \"80ba9f63-ae49-476c-9282-f9b32f804ab3\") " pod="openstack-operators/openstack-operator-index-2g762" Jan 22 06:51:53 crc kubenswrapper[4720]: I0122 06:51:53.702831 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2g762" Jan 22 06:51:54 crc kubenswrapper[4720]: I0122 06:51:54.809827 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2g762"] Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.229322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2g762" event={"ID":"80ba9f63-ae49-476c-9282-f9b32f804ab3","Type":"ContainerStarted","Data":"7c1081bbf8910a62b6648a237422cfa48d99d01ad0de2851b63074fabdfb8617"} Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.229384 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2g762" event={"ID":"80ba9f63-ae49-476c-9282-f9b32f804ab3","Type":"ContainerStarted","Data":"ede2efeb2f143cd9fcd6281a310069d4011c6341ce89a0bcefbd328349971545"} Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.230814 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nnqrz" event={"ID":"39173aff-54de-48b2-aac0-4e515c596f7a","Type":"ContainerStarted","Data":"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a"} Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.230962 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nnqrz" podUID="39173aff-54de-48b2-aac0-4e515c596f7a" containerName="registry-server" containerID="cri-o://c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a" gracePeriod=2 Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.259554 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2g762" podStartSLOduration=2.188038993 podStartE2EDuration="2.259527893s" podCreationTimestamp="2026-01-22 06:51:53 +0000 UTC" firstStartedPulling="2026-01-22 06:51:54.821512612 +0000 UTC 
m=+1006.963419327" lastFinishedPulling="2026-01-22 06:51:54.893001512 +0000 UTC m=+1007.034908227" observedRunningTime="2026-01-22 06:51:55.257767923 +0000 UTC m=+1007.399674708" watchObservedRunningTime="2026-01-22 06:51:55.259527893 +0000 UTC m=+1007.401434608" Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.292411 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nnqrz" podStartSLOduration=1.7854561260000001 podStartE2EDuration="7.292384146s" podCreationTimestamp="2026-01-22 06:51:48 +0000 UTC" firstStartedPulling="2026-01-22 06:51:48.816767903 +0000 UTC m=+1000.958674598" lastFinishedPulling="2026-01-22 06:51:54.323695913 +0000 UTC m=+1006.465602618" observedRunningTime="2026-01-22 06:51:55.290250315 +0000 UTC m=+1007.432157040" watchObservedRunningTime="2026-01-22 06:51:55.292384146 +0000 UTC m=+1007.434290861" Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.700848 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.896128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j8fl\" (UniqueName: \"kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl\") pod \"39173aff-54de-48b2-aac0-4e515c596f7a\" (UID: \"39173aff-54de-48b2-aac0-4e515c596f7a\") " Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.907538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl" (OuterVolumeSpecName: "kube-api-access-8j8fl") pod "39173aff-54de-48b2-aac0-4e515c596f7a" (UID: "39173aff-54de-48b2-aac0-4e515c596f7a"). InnerVolumeSpecName "kube-api-access-8j8fl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:51:55 crc kubenswrapper[4720]: I0122 06:51:55.998754 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j8fl\" (UniqueName: \"kubernetes.io/projected/39173aff-54de-48b2-aac0-4e515c596f7a-kube-api-access-8j8fl\") on node \"crc\" DevicePath \"\"" Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.243182 4720 generic.go:334] "Generic (PLEG): container finished" podID="39173aff-54de-48b2-aac0-4e515c596f7a" containerID="c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a" exitCode=0 Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.244633 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nnqrz" Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.249174 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nnqrz" event={"ID":"39173aff-54de-48b2-aac0-4e515c596f7a","Type":"ContainerDied","Data":"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a"} Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.249252 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nnqrz" event={"ID":"39173aff-54de-48b2-aac0-4e515c596f7a","Type":"ContainerDied","Data":"40ec59b6f237d97401736726c12a17838dc7e74d76225078c8101212238839bb"} Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.249279 4720 scope.go:117] "RemoveContainer" containerID="c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a" Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.284400 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"] Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.287575 4720 scope.go:117] "RemoveContainer" containerID="c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a" Jan 22 06:51:56 crc 
kubenswrapper[4720]: E0122 06:51:56.288257 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a\": container with ID starting with c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a not found: ID does not exist" containerID="c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a"
Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.288334 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a"} err="failed to get container status \"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a\": rpc error: code = NotFound desc = could not find container \"c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a\": container with ID starting with c541402c1bb14a01cdb656309d83aba3c0242fd35851904f4734e3d5d6bacf1a not found: ID does not exist"
Jan 22 06:51:56 crc kubenswrapper[4720]: I0122 06:51:56.290589 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nnqrz"]
Jan 22 06:51:58 crc kubenswrapper[4720]: I0122 06:51:58.222150 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39173aff-54de-48b2-aac0-4e515c596f7a" path="/var/lib/kubelet/pods/39173aff-54de-48b2-aac0-4e515c596f7a/volumes"
Jan 22 06:51:59 crc kubenswrapper[4720]: I0122 06:51:59.780595 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:51:59 crc kubenswrapper[4720]: I0122 06:51:59.780758 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:52:01 crc kubenswrapper[4720]: E0122 06:52:01.634533 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc1373bb_3c54_4e19_9129_6d8b288bdc1a.slice\": RecentStats: unable to find data in memory cache]"
Jan 22 06:52:03 crc kubenswrapper[4720]: I0122 06:52:03.704265 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2g762"
Jan 22 06:52:03 crc kubenswrapper[4720]: I0122 06:52:03.704341 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2g762"
Jan 22 06:52:03 crc kubenswrapper[4720]: I0122 06:52:03.757123 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2g762"
Jan 22 06:52:04 crc kubenswrapper[4720]: I0122 06:52:04.362889 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2g762"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.567876 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"]
Jan 22 06:52:12 crc kubenswrapper[4720]: E0122 06:52:12.569196 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39173aff-54de-48b2-aac0-4e515c596f7a" containerName="registry-server"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.569218 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="39173aff-54de-48b2-aac0-4e515c596f7a" containerName="registry-server"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.569467 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="39173aff-54de-48b2-aac0-4e515c596f7a" containerName="registry-server"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.570937 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.573931 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bfns"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.584294 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"]
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.619749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.619991 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjxz\" (UniqueName: \"kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.620078 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.722166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjxz\" (UniqueName: \"kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.722356 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.723173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.723733 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.723950 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.752259 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjxz\" (UniqueName: \"kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz\") pod \"037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") " pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:12 crc kubenswrapper[4720]: I0122 06:52:12.925158 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:13 crc kubenswrapper[4720]: I0122 06:52:13.427830 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"]
Jan 22 06:52:14 crc kubenswrapper[4720]: I0122 06:52:14.411172 4720 generic.go:334] "Generic (PLEG): container finished" podID="61a1b004-dab4-4246-93a6-81d023e08232" containerID="28a998f3c03a978bfcf0a25804fb63169f106f1b1ed35eb16627b06b6433cc44" exitCode=0
Jan 22 06:52:14 crc kubenswrapper[4720]: I0122 06:52:14.411260 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt" event={"ID":"61a1b004-dab4-4246-93a6-81d023e08232","Type":"ContainerDied","Data":"28a998f3c03a978bfcf0a25804fb63169f106f1b1ed35eb16627b06b6433cc44"}
Jan 22 06:52:14 crc kubenswrapper[4720]: I0122 06:52:14.411700 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt" event={"ID":"61a1b004-dab4-4246-93a6-81d023e08232","Type":"ContainerStarted","Data":"f61e50600cedd91592d04165aeba4183eea5053dc98338823583f54beb2a6ebc"}
Jan 22 06:52:15 crc kubenswrapper[4720]: I0122 06:52:15.423582 4720 generic.go:334] "Generic (PLEG): container finished" podID="61a1b004-dab4-4246-93a6-81d023e08232" containerID="34e3a121babe9098ed7085146dd93036a94f5ceed2c66bd2abe1441991ff366b" exitCode=0
Jan 22 06:52:15 crc kubenswrapper[4720]: I0122 06:52:15.423680 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt" event={"ID":"61a1b004-dab4-4246-93a6-81d023e08232","Type":"ContainerDied","Data":"34e3a121babe9098ed7085146dd93036a94f5ceed2c66bd2abe1441991ff366b"}
Jan 22 06:52:16 crc kubenswrapper[4720]: I0122 06:52:16.433976 4720 generic.go:334] "Generic (PLEG): container finished" podID="61a1b004-dab4-4246-93a6-81d023e08232" containerID="ae448b8bdfd8ee756f6a879409f65546d021482976d2ccbe0b1add9a02fee717" exitCode=0
Jan 22 06:52:16 crc kubenswrapper[4720]: I0122 06:52:16.434035 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt" event={"ID":"61a1b004-dab4-4246-93a6-81d023e08232","Type":"ContainerDied","Data":"ae448b8bdfd8ee756f6a879409f65546d021482976d2ccbe0b1add9a02fee717"}
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.704159 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.816985 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjxz\" (UniqueName: \"kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz\") pod \"61a1b004-dab4-4246-93a6-81d023e08232\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") "
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.817166 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle\") pod \"61a1b004-dab4-4246-93a6-81d023e08232\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") "
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.817262 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util\") pod \"61a1b004-dab4-4246-93a6-81d023e08232\" (UID: \"61a1b004-dab4-4246-93a6-81d023e08232\") "
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.818159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle" (OuterVolumeSpecName: "bundle") pod "61a1b004-dab4-4246-93a6-81d023e08232" (UID: "61a1b004-dab4-4246-93a6-81d023e08232"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.826432 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz" (OuterVolumeSpecName: "kube-api-access-cbjxz") pod "61a1b004-dab4-4246-93a6-81d023e08232" (UID: "61a1b004-dab4-4246-93a6-81d023e08232"). InnerVolumeSpecName "kube-api-access-cbjxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.839658 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util" (OuterVolumeSpecName: "util") pod "61a1b004-dab4-4246-93a6-81d023e08232" (UID: "61a1b004-dab4-4246-93a6-81d023e08232"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.919017 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjxz\" (UniqueName: \"kubernetes.io/projected/61a1b004-dab4-4246-93a6-81d023e08232-kube-api-access-cbjxz\") on node \"crc\" DevicePath \"\""
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.919073 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:52:17 crc kubenswrapper[4720]: I0122 06:52:17.919087 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61a1b004-dab4-4246-93a6-81d023e08232-util\") on node \"crc\" DevicePath \"\""
Jan 22 06:52:18 crc kubenswrapper[4720]: I0122 06:52:18.455783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt" event={"ID":"61a1b004-dab4-4246-93a6-81d023e08232","Type":"ContainerDied","Data":"f61e50600cedd91592d04165aeba4183eea5053dc98338823583f54beb2a6ebc"}
Jan 22 06:52:18 crc kubenswrapper[4720]: I0122 06:52:18.455839 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f61e50600cedd91592d04165aeba4183eea5053dc98338823583f54beb2a6ebc"
Jan 22 06:52:18 crc kubenswrapper[4720]: I0122 06:52:18.455954 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.529007 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"]
Jan 22 06:52:24 crc kubenswrapper[4720]: E0122 06:52:24.530142 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="util"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.530159 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="util"
Jan 22 06:52:24 crc kubenswrapper[4720]: E0122 06:52:24.530167 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="pull"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.530174 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="pull"
Jan 22 06:52:24 crc kubenswrapper[4720]: E0122 06:52:24.530203 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="extract"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.530212 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="extract"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.530360 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a1b004-dab4-4246-93a6-81d023e08232" containerName="extract"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.531064 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.535124 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-qjcnd"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.570628 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"]
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.623357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzs2b\" (UniqueName: \"kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b\") pod \"openstack-operator-controller-init-547d554b65-gvw8s\" (UID: \"80d040c7-3118-45d5-9f1e-2681a8d116d7\") " pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.724461 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzs2b\" (UniqueName: \"kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b\") pod \"openstack-operator-controller-init-547d554b65-gvw8s\" (UID: \"80d040c7-3118-45d5-9f1e-2681a8d116d7\") " pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.758481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzs2b\" (UniqueName: \"kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b\") pod \"openstack-operator-controller-init-547d554b65-gvw8s\" (UID: \"80d040c7-3118-45d5-9f1e-2681a8d116d7\") " pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:24 crc kubenswrapper[4720]: I0122 06:52:24.855019 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:25 crc kubenswrapper[4720]: I0122 06:52:25.318131 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"]
Jan 22 06:52:25 crc kubenswrapper[4720]: I0122 06:52:25.521333 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" event={"ID":"80d040c7-3118-45d5-9f1e-2681a8d116d7","Type":"ContainerStarted","Data":"2bdb9a41eae3dfee7be190ffab88fa68ac9004d05c469ed40d61c6b5858bfd82"}
Jan 22 06:52:29 crc kubenswrapper[4720]: I0122 06:52:29.780597 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:52:29 crc kubenswrapper[4720]: I0122 06:52:29.781170 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:52:32 crc kubenswrapper[4720]: I0122 06:52:32.578236 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" event={"ID":"80d040c7-3118-45d5-9f1e-2681a8d116d7","Type":"ContainerStarted","Data":"0630b7100f90bc02361979ca329124c3f5cb8d034d8af4ac6efb5ff5c50988ca"}
Jan 22 06:52:32 crc kubenswrapper[4720]: I0122 06:52:32.578813 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:32 crc kubenswrapper[4720]: I0122 06:52:32.618323 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" podStartSLOduration=1.630804535 podStartE2EDuration="8.618299267s" podCreationTimestamp="2026-01-22 06:52:24 +0000 UTC" firstStartedPulling="2026-01-22 06:52:25.334146549 +0000 UTC m=+1037.476053264" lastFinishedPulling="2026-01-22 06:52:32.321641291 +0000 UTC m=+1044.463547996" observedRunningTime="2026-01-22 06:52:32.613504421 +0000 UTC m=+1044.755411166" watchObservedRunningTime="2026-01-22 06:52:32.618299267 +0000 UTC m=+1044.760205992"
Jan 22 06:52:44 crc kubenswrapper[4720]: I0122 06:52:44.859483 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:52:59 crc kubenswrapper[4720]: I0122 06:52:59.780685 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:52:59 crc kubenswrapper[4720]: I0122 06:52:59.781597 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:52:59 crc kubenswrapper[4720]: I0122 06:52:59.781652 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:52:59 crc kubenswrapper[4720]: I0122 06:52:59.782465 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:52:59 crc kubenswrapper[4720]: I0122 06:52:59.782540 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6" gracePeriod=600
Jan 22 06:53:01 crc kubenswrapper[4720]: I0122 06:53:01.833993 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6" exitCode=0
Jan 22 06:53:01 crc kubenswrapper[4720]: I0122 06:53:01.834392 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6"}
Jan 22 06:53:01 crc kubenswrapper[4720]: I0122 06:53:01.834438 4720 scope.go:117] "RemoveContainer" containerID="5133cd7a4f98ed55da7368ea4892714f9b22a1313703673917d384626f9d42e1"
Jan 22 06:53:02 crc kubenswrapper[4720]: I0122 06:53:02.850798 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677"}
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.322339 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.324301 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.326314 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9rzfx"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.338165 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.349461 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.350403 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.353845 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-s8ns6"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.372960 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.376679 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.381705 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-7t2wv"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.407988 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.411561 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.413582 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.420492 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vr5kx"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.434670 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.454798 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.459831 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.461052 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.464676 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-j55hl"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.466888 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.467808 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.473015 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnz5\" (UniqueName: \"kubernetes.io/projected/a072cd1a-6b0c-4f3c-aa50-12a441bc87e3-kube-api-access-wrnz5\") pod \"barbican-operator-controller-manager-59dd8b7cbf-kp5p9\" (UID: \"a072cd1a-6b0c-4f3c-aa50-12a441bc87e3\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.473086 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46cg6\" (UniqueName: \"kubernetes.io/projected/15bf2b23-40fc-4958-9774-3c6e4f2c591a-kube-api-access-46cg6\") pod \"cinder-operator-controller-manager-69cf5d4557-g9d9q\" (UID: \"15bf2b23-40fc-4958-9774-3c6e4f2c591a\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.473119 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9k2h\" (UniqueName: \"kubernetes.io/projected/cc13fc87-a160-4804-aef4-bb2c6ee89f13-kube-api-access-p9k2h\") pod \"designate-operator-controller-manager-b45d7bf98-4nvtq\" (UID: \"cc13fc87-a160-4804-aef4-bb2c6ee89f13\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.477038 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-w9tx4"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.491097 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.510107 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.528989 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.530367 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.557442 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-zvx87"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.573632 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.606404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcqdv\" (UniqueName: \"kubernetes.io/projected/b464ce62-6f79-452c-a1c6-3c4878bcc8ba-kube-api-access-dcqdv\") pod \"glance-operator-controller-manager-78fdd796fd-l7wpl\" (UID: \"b464ce62-6f79-452c-a1c6-3c4878bcc8ba\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.606741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9k2h\" (UniqueName: \"kubernetes.io/projected/cc13fc87-a160-4804-aef4-bb2c6ee89f13-kube-api-access-p9k2h\") pod \"designate-operator-controller-manager-b45d7bf98-4nvtq\" (UID: \"cc13fc87-a160-4804-aef4-bb2c6ee89f13\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.607755 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24p8n\" (UniqueName: \"kubernetes.io/projected/f30c0975-10b7-4d3b-98f7-63a02ae44927-kube-api-access-24p8n\") pod \"heat-operator-controller-manager-594c8c9d5d-6rl8m\" (UID: \"f30c0975-10b7-4d3b-98f7-63a02ae44927\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.608229 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqlf\" (UniqueName: \"kubernetes.io/projected/7d67431b-e376-4558-83f2-af33c36b403b-kube-api-access-lbqlf\") pod \"horizon-operator-controller-manager-77d5c5b54f-9jw99\" (UID: \"7d67431b-e376-4558-83f2-af33c36b403b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.633450 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.634609 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnz5\" (UniqueName: \"kubernetes.io/projected/a072cd1a-6b0c-4f3c-aa50-12a441bc87e3-kube-api-access-wrnz5\") pod \"barbican-operator-controller-manager-59dd8b7cbf-kp5p9\" (UID: \"a072cd1a-6b0c-4f3c-aa50-12a441bc87e3\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.634661 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46cg6\" (UniqueName: \"kubernetes.io/projected/15bf2b23-40fc-4958-9774-3c6e4f2c591a-kube-api-access-46cg6\") pod \"cinder-operator-controller-manager-69cf5d4557-g9d9q\" (UID: \"15bf2b23-40fc-4958-9774-3c6e4f2c591a\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.642717 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xldq5"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.643345 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.691098 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.692509 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.698505 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-r29cz"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.699629 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9k2h\" (UniqueName: \"kubernetes.io/projected/cc13fc87-a160-4804-aef4-bb2c6ee89f13-kube-api-access-p9k2h\") pod \"designate-operator-controller-manager-b45d7bf98-4nvtq\" (UID: \"cc13fc87-a160-4804-aef4-bb2c6ee89f13\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.712571 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.720737 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnz5\" (UniqueName: \"kubernetes.io/projected/a072cd1a-6b0c-4f3c-aa50-12a441bc87e3-kube-api-access-wrnz5\") pod \"barbican-operator-controller-manager-59dd8b7cbf-kp5p9\" (UID: \"a072cd1a-6b0c-4f3c-aa50-12a441bc87e3\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.726358 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.727632 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.729104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46cg6\" (UniqueName: \"kubernetes.io/projected/15bf2b23-40fc-4958-9774-3c6e4f2c591a-kube-api-access-46cg6\") pod \"cinder-operator-controller-manager-69cf5d4557-g9d9q\" (UID: \"15bf2b23-40fc-4958-9774-3c6e4f2c591a\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.737173 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kdlw7"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.738073 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24p8n\" (UniqueName: \"kubernetes.io/projected/f30c0975-10b7-4d3b-98f7-63a02ae44927-kube-api-access-24p8n\") pod \"heat-operator-controller-manager-594c8c9d5d-6rl8m\" (UID: \"f30c0975-10b7-4d3b-98f7-63a02ae44927\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.738142 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2824\" (UniqueName: \"kubernetes.io/projected/ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1-kube-api-access-p2824\") pod \"ironic-operator-controller-manager-69d6c9f5b8-d5h9r\" (UID: \"ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.738207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqlf\" (UniqueName: \"kubernetes.io/projected/7d67431b-e376-4558-83f2-af33c36b403b-kube-api-access-lbqlf\") pod \"horizon-operator-controller-manager-77d5c5b54f-9jw99\" (UID: \"7d67431b-e376-4558-83f2-af33c36b403b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.738269 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcqdv\" (UniqueName: \"kubernetes.io/projected/b464ce62-6f79-452c-a1c6-3c4878bcc8ba-kube-api-access-dcqdv\") pod \"glance-operator-controller-manager-78fdd796fd-l7wpl\" (UID: \"b464ce62-6f79-452c-a1c6-3c4878bcc8ba\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.773617 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"]
Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.787739 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24p8n\" (UniqueName: \"kubernetes.io/projected/f30c0975-10b7-4d3b-98f7-63a02ae44927-kube-api-access-24p8n\") pod 
\"heat-operator-controller-manager-594c8c9d5d-6rl8m\" (UID: \"f30c0975-10b7-4d3b-98f7-63a02ae44927\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.811611 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.813481 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.815571 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcqdv\" (UniqueName: \"kubernetes.io/projected/b464ce62-6f79-452c-a1c6-3c4878bcc8ba-kube-api-access-dcqdv\") pod \"glance-operator-controller-manager-78fdd796fd-l7wpl\" (UID: \"b464ce62-6f79-452c-a1c6-3c4878bcc8ba\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.819806 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dkbtc" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.819871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.825977 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.828766 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqlf\" (UniqueName: \"kubernetes.io/projected/7d67431b-e376-4558-83f2-af33c36b403b-kube-api-access-lbqlf\") pod \"horizon-operator-controller-manager-77d5c5b54f-9jw99\" (UID: 
\"7d67431b-e376-4558-83f2-af33c36b403b\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.837545 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.842983 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.846051 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lq8\" (UniqueName: \"kubernetes.io/projected/d681304a-06cd-4870-b2b5-4f10936b7775-kube-api-access-p5lq8\") pod \"manila-operator-controller-manager-78c6999f6f-nn4jg\" (UID: \"d681304a-06cd-4870-b2b5-4f10936b7775\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.846165 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2824\" (UniqueName: \"kubernetes.io/projected/ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1-kube-api-access-p2824\") pod \"ironic-operator-controller-manager-69d6c9f5b8-d5h9r\" (UID: \"ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.846517 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg8wz\" (UniqueName: \"kubernetes.io/projected/21ee70f0-2938-4d3a-9edf-beaa943261ab-kube-api-access-tg8wz\") pod \"keystone-operator-controller-manager-b8b6d4659-ddkv8\" (UID: \"21ee70f0-2938-4d3a-9edf-beaa943261ab\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.846559 
4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24qr\" (UniqueName: \"kubernetes.io/projected/de14bbbe-09fc-4f3c-8857-e3f7abca82f8-kube-api-access-t24qr\") pod \"mariadb-operator-controller-manager-c87fff755-hq64w\" (UID: \"de14bbbe-09fc-4f3c-8857-e3f7abca82f8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.853069 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.854648 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.859839 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.860992 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.869938 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tzw4v" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.870831 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-8jcp4" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.874772 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.875873 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.877053 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.882439 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-6q8xq" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.882755 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.891645 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.895509 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.897172 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.899988 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.900136 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rhlb6" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.903644 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2824\" (UniqueName: \"kubernetes.io/projected/ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1-kube-api-access-p2824\") pod \"ironic-operator-controller-manager-69d6c9f5b8-d5h9r\" (UID: \"ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.906929 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.908314 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.912418 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6xdml" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.918290 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.936800 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.938114 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.947529 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-4bbmz" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.956382 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.954567 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg8wz\" (UniqueName: \"kubernetes.io/projected/21ee70f0-2938-4d3a-9edf-beaa943261ab-kube-api-access-tg8wz\") pod \"keystone-operator-controller-manager-b8b6d4659-ddkv8\" (UID: \"21ee70f0-2938-4d3a-9edf-beaa943261ab\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.957071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24qr\" (UniqueName: \"kubernetes.io/projected/de14bbbe-09fc-4f3c-8857-e3f7abca82f8-kube-api-access-t24qr\") pod \"mariadb-operator-controller-manager-c87fff755-hq64w\" (UID: \"de14bbbe-09fc-4f3c-8857-e3f7abca82f8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.957104 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfsm9\" (UniqueName: \"kubernetes.io/projected/25a73ab8-0306-4e57-9417-ce651e370925-kube-api-access-mfsm9\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.957134 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fr86\" (UniqueName: \"kubernetes.io/projected/fd7a6c01-1255-4f11-9dba-d3119753d47c-kube-api-access-6fr86\") pod \"neutron-operator-controller-manager-5d8f59fb49-47njc\" (UID: \"fd7a6c01-1255-4f11-9dba-d3119753d47c\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" Jan 22 06:53:06 
crc kubenswrapper[4720]: I0122 06:53:06.957161 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.957188 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lq8\" (UniqueName: \"kubernetes.io/projected/d681304a-06cd-4870-b2b5-4f10936b7775-kube-api-access-p5lq8\") pod \"manila-operator-controller-manager-78c6999f6f-nn4jg\" (UID: \"d681304a-06cd-4870-b2b5-4f10936b7775\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.966839 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.971281 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.972249 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.974003 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.974685 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.985288 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-n4nnt" Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.987447 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67"] Jan 22 06:53:06 crc kubenswrapper[4720]: I0122 06:53:06.988448 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.001777 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xkfmk" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.004711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg8wz\" (UniqueName: \"kubernetes.io/projected/21ee70f0-2938-4d3a-9edf-beaa943261ab-kube-api-access-tg8wz\") pod \"keystone-operator-controller-manager-b8b6d4659-ddkv8\" (UID: \"21ee70f0-2938-4d3a-9edf-beaa943261ab\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.016160 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lq8\" (UniqueName: \"kubernetes.io/projected/d681304a-06cd-4870-b2b5-4f10936b7775-kube-api-access-p5lq8\") pod \"manila-operator-controller-manager-78c6999f6f-nn4jg\" (UID: \"d681304a-06cd-4870-b2b5-4f10936b7775\") " 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.020254 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24qr\" (UniqueName: \"kubernetes.io/projected/de14bbbe-09fc-4f3c-8857-e3f7abca82f8-kube-api-access-t24qr\") pod \"mariadb-operator-controller-manager-c87fff755-hq64w\" (UID: \"de14bbbe-09fc-4f3c-8857-e3f7abca82f8\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.045297 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.050168 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.058814 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfsm9\" (UniqueName: \"kubernetes.io/projected/25a73ab8-0306-4e57-9417-ce651e370925-kube-api-access-mfsm9\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.058879 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvzk5\" (UniqueName: \"kubernetes.io/projected/a2440b28-2217-482c-87c6-443616b586cb-kube-api-access-jvzk5\") pod \"nova-operator-controller-manager-6b8bc8d87d-tnvdl\" (UID: \"a2440b28-2217-482c-87c6-443616b586cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.058937 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnqh\" (UniqueName: \"kubernetes.io/projected/e77f3a0e-4936-4b98-829b-6ea9ebe6e817-kube-api-access-pnnqh\") pod \"octavia-operator-controller-manager-7bd9774b6-gkhjf\" (UID: \"e77f3a0e-4936-4b98-829b-6ea9ebe6e817\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.058967 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fr86\" (UniqueName: \"kubernetes.io/projected/fd7a6c01-1255-4f11-9dba-d3119753d47c-kube-api-access-6fr86\") pod \"neutron-operator-controller-manager-5d8f59fb49-47njc\" (UID: \"fd7a6c01-1255-4f11-9dba-d3119753d47c\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.059002 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwf8j\" (UniqueName: \"kubernetes.io/projected/476ecc66-be12-4a68-8de1-3a062ec12f55-kube-api-access-kwf8j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.059030 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.059117 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wdtb\" (UniqueName: 
\"kubernetes.io/projected/0a6de6f6-4bef-4f84-b4b8-4de46e9347b1-kube-api-access-9wdtb\") pod \"placement-operator-controller-manager-5d646b7d76-wmhbp\" (UID: \"0a6de6f6-4bef-4f84-b4b8-4de46e9347b1\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.059148 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.059169 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf5gz\" (UniqueName: \"kubernetes.io/projected/6a45a130-7295-401c-a63c-1df68c263764-kube-api-access-gf5gz\") pod \"ovn-operator-controller-manager-55db956ddc-m2hkw\" (UID: \"6a45a130-7295-401c-a63c-1df68c263764\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.059781 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.059838 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:07.559814443 +0000 UTC m=+1079.701721148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.083255 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.098344 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.130167 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fr86\" (UniqueName: \"kubernetes.io/projected/fd7a6c01-1255-4f11-9dba-d3119753d47c-kube-api-access-6fr86\") pod \"neutron-operator-controller-manager-5d8f59fb49-47njc\" (UID: \"fd7a6c01-1255-4f11-9dba-d3119753d47c\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.133159 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfsm9\" (UniqueName: \"kubernetes.io/projected/25a73ab8-0306-4e57-9417-ce651e370925-kube-api-access-mfsm9\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.135001 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.147387 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.170241 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.170887 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bppz\" (UniqueName: \"kubernetes.io/projected/0e186e5c-83e6-465d-9353-e9314702d85a-kube-api-access-5bppz\") pod \"telemetry-operator-controller-manager-85cd9769bb-4tlfl\" (UID: \"0e186e5c-83e6-465d-9353-e9314702d85a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171441 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8xl\" (UniqueName: \"kubernetes.io/projected/e9c3503d-2a2a-4f59-8c25-b28a681cdcfb-kube-api-access-mz8xl\") pod \"swift-operator-controller-manager-547cbdb99f-xqx67\" (UID: \"e9c3503d-2a2a-4f59-8c25-b28a681cdcfb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wdtb\" (UniqueName: \"kubernetes.io/projected/0a6de6f6-4bef-4f84-b4b8-4de46e9347b1-kube-api-access-9wdtb\") pod \"placement-operator-controller-manager-5d646b7d76-wmhbp\" (UID: \"0a6de6f6-4bef-4f84-b4b8-4de46e9347b1\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf5gz\" (UniqueName: \"kubernetes.io/projected/6a45a130-7295-401c-a63c-1df68c263764-kube-api-access-gf5gz\") pod \"ovn-operator-controller-manager-55db956ddc-m2hkw\" (UID: \"6a45a130-7295-401c-a63c-1df68c263764\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171598 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvzk5\" (UniqueName: \"kubernetes.io/projected/a2440b28-2217-482c-87c6-443616b586cb-kube-api-access-jvzk5\") pod \"nova-operator-controller-manager-6b8bc8d87d-tnvdl\" (UID: \"a2440b28-2217-482c-87c6-443616b586cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171623 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnqh\" (UniqueName: \"kubernetes.io/projected/e77f3a0e-4936-4b98-829b-6ea9ebe6e817-kube-api-access-pnnqh\") pod \"octavia-operator-controller-manager-7bd9774b6-gkhjf\" (UID: \"e77f3a0e-4936-4b98-829b-6ea9ebe6e817\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.171666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwf8j\" (UniqueName: \"kubernetes.io/projected/476ecc66-be12-4a68-8de1-3a062ec12f55-kube-api-access-kwf8j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.172173 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.172228 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:07.672207425 +0000 UTC m=+1079.814114130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.189937 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.207595 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.211658 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jpws5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.213061 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.255011 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.278794 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnqh\" (UniqueName: \"kubernetes.io/projected/e77f3a0e-4936-4b98-829b-6ea9ebe6e817-kube-api-access-pnnqh\") pod \"octavia-operator-controller-manager-7bd9774b6-gkhjf\" (UID: \"e77f3a0e-4936-4b98-829b-6ea9ebe6e817\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.287632 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.291071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bppz\" (UniqueName: \"kubernetes.io/projected/0e186e5c-83e6-465d-9353-e9314702d85a-kube-api-access-5bppz\") pod \"telemetry-operator-controller-manager-85cd9769bb-4tlfl\" (UID: \"0e186e5c-83e6-465d-9353-e9314702d85a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.291176 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jrq\" (UniqueName: \"kubernetes.io/projected/1a3c6a91-064b-4006-b40f-ba7bc317aa83-kube-api-access-q6jrq\") pod \"test-operator-controller-manager-69797bbcbd-2cs6n\" (UID: \"1a3c6a91-064b-4006-b40f-ba7bc317aa83\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.291242 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8xl\" (UniqueName: \"kubernetes.io/projected/e9c3503d-2a2a-4f59-8c25-b28a681cdcfb-kube-api-access-mz8xl\") pod 
\"swift-operator-controller-manager-547cbdb99f-xqx67\" (UID: \"e9c3503d-2a2a-4f59-8c25-b28a681cdcfb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.319408 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvzk5\" (UniqueName: \"kubernetes.io/projected/a2440b28-2217-482c-87c6-443616b586cb-kube-api-access-jvzk5\") pod \"nova-operator-controller-manager-6b8bc8d87d-tnvdl\" (UID: \"a2440b28-2217-482c-87c6-443616b586cb\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.322695 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wdtb\" (UniqueName: \"kubernetes.io/projected/0a6de6f6-4bef-4f84-b4b8-4de46e9347b1-kube-api-access-9wdtb\") pod \"placement-operator-controller-manager-5d646b7d76-wmhbp\" (UID: \"0a6de6f6-4bef-4f84-b4b8-4de46e9347b1\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.323380 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwf8j\" (UniqueName: \"kubernetes.io/projected/476ecc66-be12-4a68-8de1-3a062ec12f55-kube-api-access-kwf8j\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.360802 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.399465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf5gz\" (UniqueName: 
\"kubernetes.io/projected/6a45a130-7295-401c-a63c-1df68c263764-kube-api-access-gf5gz\") pod \"ovn-operator-controller-manager-55db956ddc-m2hkw\" (UID: \"6a45a130-7295-401c-a63c-1df68c263764\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.401996 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.403043 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.403640 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6jrq\" (UniqueName: \"kubernetes.io/projected/1a3c6a91-064b-4006-b40f-ba7bc317aa83-kube-api-access-q6jrq\") pod \"test-operator-controller-manager-69797bbcbd-2cs6n\" (UID: \"1a3c6a91-064b-4006-b40f-ba7bc317aa83\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.404229 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.424836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz8xl\" (UniqueName: \"kubernetes.io/projected/e9c3503d-2a2a-4f59-8c25-b28a681cdcfb-kube-api-access-mz8xl\") pod \"swift-operator-controller-manager-547cbdb99f-xqx67\" (UID: \"e9c3503d-2a2a-4f59-8c25-b28a681cdcfb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.459876 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6jrq\" (UniqueName: \"kubernetes.io/projected/1a3c6a91-064b-4006-b40f-ba7bc317aa83-kube-api-access-q6jrq\") pod \"test-operator-controller-manager-69797bbcbd-2cs6n\" (UID: \"1a3c6a91-064b-4006-b40f-ba7bc317aa83\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.467603 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bppz\" (UniqueName: \"kubernetes.io/projected/0e186e5c-83e6-465d-9353-e9314702d85a-kube-api-access-5bppz\") pod \"telemetry-operator-controller-manager-85cd9769bb-4tlfl\" (UID: \"0e186e5c-83e6-465d-9353-e9314702d85a\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.523050 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.543013 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.550768 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zxvbg" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.552633 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.553405 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.562286 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.588113 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.601173 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.606187 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4rg\" (UniqueName: \"kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg\") pod \"watcher-operator-controller-manager-57c994f794-t6ms5\" (UID: \"f88ed309-12b4-4cb4-bc95-6e6873c72c10\") " pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.606307 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.606461 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.606521 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:08.60650253 +0000 UTC m=+1080.748409235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.614255 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.658595 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.660578 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.666344 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mprp8" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.666432 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.668005 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.669627 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.673630 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.677538 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.678553 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.698472 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fn6z7" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.701337 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6"] Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.708156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.709241 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp4rg\" (UniqueName: \"kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg\") pod \"watcher-operator-controller-manager-57c994f794-t6ms5\" (UID: \"f88ed309-12b4-4cb4-bc95-6e6873c72c10\") " pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.709287 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxtjv\" (UniqueName: \"kubernetes.io/projected/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-kube-api-access-gxtjv\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: 
\"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.709309 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.709333 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.721572 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.721639 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:08.72161824 +0000 UTC m=+1080.863524945 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.741645 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp4rg\" (UniqueName: \"kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg\") pod \"watcher-operator-controller-manager-57c994f794-t6ms5\" (UID: \"f88ed309-12b4-4cb4-bc95-6e6873c72c10\") " pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.848818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.848921 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-546mc\" (UniqueName: \"kubernetes.io/projected/ff37e0b2-69d6-4217-b44f-a8bf016e45d6-kube-api-access-546mc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4nvz6\" (UID: \"ff37e0b2-69d6-4217-b44f-a8bf016e45d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.848951 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxtjv\" (UniqueName: \"kubernetes.io/projected/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-kube-api-access-gxtjv\") pod 
\"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.848969 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.849139 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.849204 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:08.349185293 +0000 UTC m=+1080.491091998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.849246 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: E0122 06:53:07.849268 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. 
No retries permitted until 2026-01-22 06:53:08.349262135 +0000 UTC m=+1080.491168840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.940172 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.950354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxtjv\" (UniqueName: \"kubernetes.io/projected/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-kube-api-access-gxtjv\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:07 crc kubenswrapper[4720]: I0122 06:53:07.951160 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-546mc\" (UniqueName: \"kubernetes.io/projected/ff37e0b2-69d6-4217-b44f-a8bf016e45d6-kube-api-access-546mc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4nvz6\" (UID: \"ff37e0b2-69d6-4217-b44f-a8bf016e45d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.029229 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-546mc\" (UniqueName: \"kubernetes.io/projected/ff37e0b2-69d6-4217-b44f-a8bf016e45d6-kube-api-access-546mc\") pod \"rabbitmq-cluster-operator-manager-668c99d594-4nvz6\" (UID: \"ff37e0b2-69d6-4217-b44f-a8bf016e45d6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" Jan 22 
06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.029887 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"] Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.048436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq" event={"ID":"cc13fc87-a160-4804-aef4-bb2c6ee89f13","Type":"ContainerStarted","Data":"87f49b45faac8db062c18ed3cb902059d6a0e1912bb6fa3166c39429ab7f078b"} Jan 22 06:53:08 crc kubenswrapper[4720]: W0122 06:53:08.060730 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15bf2b23_40fc_4958_9774_3c6e4f2c591a.slice/crio-3cb836fde06adda73debf052069f2427add25a58e770aa20a3dc417c4ba57f67 WatchSource:0}: Error finding container 3cb836fde06adda73debf052069f2427add25a58e770aa20a3dc417c4ba57f67: Status 404 returned error can't find the container with id 3cb836fde06adda73debf052069f2427add25a58e770aa20a3dc417c4ba57f67 Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.068711 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"] Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.340373 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-fn6z7" Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.350564 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.429102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.429179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.429322 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.429381 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:09.429362931 +0000 UTC m=+1081.571269636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.430154 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.430277 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:09.430244926 +0000 UTC m=+1081.572151791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.635719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.635887 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.635963 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert 
podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:10.635942949 +0000 UTC m=+1082.777849644 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.735850 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"] Jan 22 06:53:08 crc kubenswrapper[4720]: I0122 06:53:08.737100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.737338 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:08 crc kubenswrapper[4720]: E0122 06:53:08.737415 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:10.73739083 +0000 UTC m=+1082.879297535 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.005544 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.057239 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf30c0975_10b7_4d3b_98f7_63a02ae44927.slice/crio-6edc8514cb5659221d69dce61fe736f76dd0bc82ebe3cebd66d2ea9385cbb412 WatchSource:0}: Error finding container 6edc8514cb5659221d69dce61fe736f76dd0bc82ebe3cebd66d2ea9385cbb412: Status 404 returned error can't find the container with id 6edc8514cb5659221d69dce61fe736f76dd0bc82ebe3cebd66d2ea9385cbb412 Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.061449 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.107640 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q" event={"ID":"15bf2b23-40fc-4958-9774-3c6e4f2c591a","Type":"ContainerStarted","Data":"3cb836fde06adda73debf052069f2427add25a58e770aa20a3dc417c4ba57f67"} Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.110327 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9" event={"ID":"a072cd1a-6b0c-4f3c-aa50-12a441bc87e3","Type":"ContainerStarted","Data":"27f4c9155220325370b759d5c90c9858df206b994d300438c524f4b906b9a858"} Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 
06:53:09.116384 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" event={"ID":"b464ce62-6f79-452c-a1c6-3c4878bcc8ba","Type":"ContainerStarted","Data":"4e6bba0bfb04c52c133144f6774ff51129486beccdccb383010735b6551cadc5"} Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.450738 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.451249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.451005 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.451468 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:11.451451632 +0000 UTC m=+1083.593358337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.451403 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.451739 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:11.451712739 +0000 UTC m=+1083.593619434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.545542 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.557310 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.567998 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ee70f0_2938_4d3a_9edf_beaa943261ab.slice/crio-ae28ca7056a8cce0c0467c1fe04dddb999df1ef4a59a3a426a65c60d0833e208 WatchSource:0}: Error finding container ae28ca7056a8cce0c0467c1fe04dddb999df1ef4a59a3a426a65c60d0833e208: Status 404 returned error can't find the 
container with id ae28ca7056a8cce0c0467c1fe04dddb999df1ef4a59a3a426a65c60d0833e208 Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.575534 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.606539 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.625536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.634097 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.654365 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.659048 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde14bbbe_09fc_4f3c_8857_e3f7abca82f8.slice/crio-e5a296bd676a455456a81b6d17864e1b2d377afd5438747ba8a1f51d133823cf WatchSource:0}: Error finding container e5a296bd676a455456a81b6d17864e1b2d377afd5438747ba8a1f51d133823cf: Status 404 returned error can't find the container with id e5a296bd676a455456a81b6d17864e1b2d377afd5438747ba8a1f51d133823cf Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.663798 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode77f3a0e_4936_4b98_829b_6ea9ebe6e817.slice/crio-8d32f44b86a6ee6f2e11e034d9d04b776e0701e8011fb8546fc763fbf6388c71 WatchSource:0}: Error finding container 
8d32f44b86a6ee6f2e11e034d9d04b776e0701e8011fb8546fc763fbf6388c71: Status 404 returned error can't find the container with id 8d32f44b86a6ee6f2e11e034d9d04b776e0701e8011fb8546fc763fbf6388c71 Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.666404 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.674802 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e186e5c_83e6_465d_9353_e9314702d85a.slice/crio-0dcd21026654d1438caf439b1a413de3fca2feab73ac16c5896dddf79f890078 WatchSource:0}: Error finding container 0dcd21026654d1438caf439b1a413de3fca2feab73ac16c5896dddf79f890078: Status 404 returned error can't find the container with id 0dcd21026654d1438caf439b1a413de3fca2feab73ac16c5896dddf79f890078 Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.675170 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.699879 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.703406 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf88ed309_12b4_4cb4_bc95_6e6873c72c10.slice/crio-d9442d19e69856b1cf0fef0842b3f32c4021300a8baa2d0f3e56723869da8080 WatchSource:0}: Error finding container d9442d19e69856b1cf0fef0842b3f32c4021300a8baa2d0f3e56723869da8080: Status 404 returned error can't find the container with id d9442d19e69856b1cf0fef0842b3f32c4021300a8baa2d0f3e56723869da8080 Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.704506 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-546mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4nvz6_openstack-operators(ff37e0b2-69d6-4217-b44f-a8bf016e45d6): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.705797 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podUID="ff37e0b2-69d6-4217-b44f-a8bf016e45d6" Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.708200 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl"] Jan 22 06:53:09 crc kubenswrapper[4720]: W0122 06:53:09.714211 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3c6a91_064b_4006_b40f_ba7bc317aa83.slice/crio-9117185ea5004350ac9966879f1d421965ed1d2644009ba027c6b3d619958868 WatchSource:0}: Error finding container 9117185ea5004350ac9966879f1d421965ed1d2644009ba027c6b3d619958868: Status 404 returned error can't find the container with id 9117185ea5004350ac9966879f1d421965ed1d2644009ba027c6b3d619958868 Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.714707 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"] Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.716007 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5lq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-nn4jg_openstack-operators(d681304a-06cd-4870-b2b5-4f10936b7775): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.716670 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6jrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2cs6n_openstack-operators(1a3c6a91-064b-4006-b40f-ba7bc317aa83): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.716820 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tp4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-57c994f794-t6ms5_openstack-operators(f88ed309-12b4-4cb4-bc95-6e6873c72c10): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.717128 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podUID="d681304a-06cd-4870-b2b5-4f10936b7775" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.717946 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" Jan 22 06:53:09 crc kubenswrapper[4720]: E0122 06:53:09.718019 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podUID="1a3c6a91-064b-4006-b40f-ba7bc317aa83" Jan 22 06:53:09 crc 
kubenswrapper[4720]: I0122 06:53:09.720702 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n"] Jan 22 06:53:09 crc kubenswrapper[4720]: I0122 06:53:09.726809 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6"] Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.130476 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" event={"ID":"d681304a-06cd-4870-b2b5-4f10936b7775","Type":"ContainerStarted","Data":"3cc97627cb1e180345ae94ef6c17c15cf6336c6596419cded54d78aded13392f"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.131677 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" event={"ID":"f88ed309-12b4-4cb4-bc95-6e6873c72c10","Type":"ContainerStarted","Data":"d9442d19e69856b1cf0fef0842b3f32c4021300a8baa2d0f3e56723869da8080"} Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.133592 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podUID="d681304a-06cd-4870-b2b5-4f10936b7775" Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.134271 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" event={"ID":"21ee70f0-2938-4d3a-9edf-beaa943261ab","Type":"ContainerStarted","Data":"ae28ca7056a8cce0c0467c1fe04dddb999df1ef4a59a3a426a65c60d0833e208"} Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.135108 4720 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.157547 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" event={"ID":"1a3c6a91-064b-4006-b40f-ba7bc317aa83","Type":"ContainerStarted","Data":"9117185ea5004350ac9966879f1d421965ed1d2644009ba027c6b3d619958868"} Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.159279 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podUID="1a3c6a91-064b-4006-b40f-ba7bc317aa83" Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.161984 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" event={"ID":"f30c0975-10b7-4d3b-98f7-63a02ae44927","Type":"ContainerStarted","Data":"6edc8514cb5659221d69dce61fe736f76dd0bc82ebe3cebd66d2ea9385cbb412"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.165235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" event={"ID":"e77f3a0e-4936-4b98-829b-6ea9ebe6e817","Type":"ContainerStarted","Data":"8d32f44b86a6ee6f2e11e034d9d04b776e0701e8011fb8546fc763fbf6388c71"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.172722 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" event={"ID":"ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1","Type":"ContainerStarted","Data":"75ead93071d0d4395508717484bd9b03b5a44ed2c499f37faedbb9ea9cf05e16"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.178563 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" event={"ID":"7d67431b-e376-4558-83f2-af33c36b403b","Type":"ContainerStarted","Data":"3454d2cb4256bdfcba1847bd57b4fdc4af4f590dc1bd7427e3067d386e70b9c7"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.181889 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" event={"ID":"fd7a6c01-1255-4f11-9dba-d3119753d47c","Type":"ContainerStarted","Data":"4341bf97b3d297e5d308d242dc5bdc905b9b107b51650fa01f5fbd234d99f0c8"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.186448 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" event={"ID":"0a6de6f6-4bef-4f84-b4b8-4de46e9347b1","Type":"ContainerStarted","Data":"81c081af8aa996172853eadec3a33824df7153675e949dc273a9b8446b1c0d2d"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.190283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" event={"ID":"a2440b28-2217-482c-87c6-443616b586cb","Type":"ContainerStarted","Data":"6cc6acedce06960a3111a7a1f3793a1eb1b730abb7471535d76f644768520df0"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.196434 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" event={"ID":"de14bbbe-09fc-4f3c-8857-e3f7abca82f8","Type":"ContainerStarted","Data":"e5a296bd676a455456a81b6d17864e1b2d377afd5438747ba8a1f51d133823cf"} Jan 22 06:53:10 crc 
kubenswrapper[4720]: I0122 06:53:10.197939 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" event={"ID":"6a45a130-7295-401c-a63c-1df68c263764","Type":"ContainerStarted","Data":"c343d759c758ebfa962ce793e7e7deead77b9f5c62c5e27c4fa31556d57cb2b0"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.201519 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" event={"ID":"0e186e5c-83e6-465d-9353-e9314702d85a","Type":"ContainerStarted","Data":"0dcd21026654d1438caf439b1a413de3fca2feab73ac16c5896dddf79f890078"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.228177 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" event={"ID":"e9c3503d-2a2a-4f59-8c25-b28a681cdcfb","Type":"ContainerStarted","Data":"1b57ac8938fe1035eb22a5c84c4cd2a4837f33931f8a62cabd0d55d201c82e01"} Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.228226 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" event={"ID":"ff37e0b2-69d6-4217-b44f-a8bf016e45d6","Type":"ContainerStarted","Data":"be5c42aac85df39a38f7924a3a05bf30e5e9ef1f5d27b20cda0590d79a9f7224"} Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.235121 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podUID="ff37e0b2-69d6-4217-b44f-a8bf016e45d6" Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.701953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.702231 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.702294 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:14.702274057 +0000 UTC m=+1086.844180762 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:10 crc kubenswrapper[4720]: I0122 06:53:10.804216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.804456 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:10 crc kubenswrapper[4720]: E0122 06:53:10.804577 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:14.804547542 +0000 UTC m=+1086.946454247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.269968 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podUID="1a3c6a91-064b-4006-b40f-ba7bc317aa83" Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.270020 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podUID="ff37e0b2-69d6-4217-b44f-a8bf016e45d6" Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.270085 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" Jan 22 06:53:11 crc 
kubenswrapper[4720]: E0122 06:53:11.276409 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podUID="d681304a-06cd-4870-b2b5-4f10936b7775" Jan 22 06:53:11 crc kubenswrapper[4720]: I0122 06:53:11.486027 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:11 crc kubenswrapper[4720]: I0122 06:53:11.486134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.486315 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.486349 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.486438 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. 
No retries permitted until 2026-01-22 06:53:15.486414359 +0000 UTC m=+1087.628321064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:11 crc kubenswrapper[4720]: E0122 06:53:11.486527 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:15.486502152 +0000 UTC m=+1087.628408857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:14 crc kubenswrapper[4720]: I0122 06:53:14.921192 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:14 crc kubenswrapper[4720]: I0122 06:53:14.922216 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:14 crc kubenswrapper[4720]: E0122 
06:53:14.922391 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:14 crc kubenswrapper[4720]: E0122 06:53:14.922461 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:22.922436951 +0000 UTC m=+1095.064343666 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:14 crc kubenswrapper[4720]: E0122 06:53:14.923065 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:14 crc kubenswrapper[4720]: E0122 06:53:14.923111 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:22.923091489 +0000 UTC m=+1095.064998194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:15 crc kubenswrapper[4720]: I0122 06:53:15.531234 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:15 crc kubenswrapper[4720]: I0122 06:53:15.531380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:15 crc kubenswrapper[4720]: E0122 06:53:15.531640 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:15 crc kubenswrapper[4720]: E0122 06:53:15.531729 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:23.531702026 +0000 UTC m=+1095.673608761 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:15 crc kubenswrapper[4720]: E0122 06:53:15.532309 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:15 crc kubenswrapper[4720]: E0122 06:53:15.532371 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:23.532355244 +0000 UTC m=+1095.674261989 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:22 crc kubenswrapper[4720]: I0122 06:53:22.941884 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:22 crc kubenswrapper[4720]: I0122 06:53:22.942957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:22 crc kubenswrapper[4720]: E0122 06:53:22.942158 4720 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:22 crc kubenswrapper[4720]: E0122 06:53:22.943244 4720 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:22 crc kubenswrapper[4720]: E0122 06:53:22.943339 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert podName:25a73ab8-0306-4e57-9417-ce651e370925 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:38.943306424 +0000 UTC m=+1111.085213159 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert") pod "infra-operator-controller-manager-54ccf4f85d-h6fd5" (UID: "25a73ab8-0306-4e57-9417-ce651e370925") : secret "infra-operator-webhook-server-cert" not found Jan 22 06:53:22 crc kubenswrapper[4720]: E0122 06:53:22.943417 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert podName:476ecc66-be12-4a68-8de1-3a062ec12f55 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:38.943385586 +0000 UTC m=+1111.085292281 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" (UID: "476ecc66-be12-4a68-8de1-3a062ec12f55") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.535788 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.536032 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24p8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-6rl8m_openstack-operators(f30c0975-10b7-4d3b-98f7-63a02ae44927): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.537203 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" podUID="f30c0975-10b7-4d3b-98f7-63a02ae44927" Jan 22 06:53:23 crc kubenswrapper[4720]: I0122 06:53:23.552765 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod 
\"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:23 crc kubenswrapper[4720]: I0122 06:53:23.552929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.553045 4720 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.553128 4720 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.553169 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. No retries permitted until 2026-01-22 06:53:39.553138045 +0000 UTC m=+1111.695044970 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "webhook-server-cert" not found Jan 22 06:53:23 crc kubenswrapper[4720]: E0122 06:53:23.553246 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs podName:611fcdc7-1f1f-4530-9f34-68dae9bf4bd5 nodeName:}" failed. 
No retries permitted until 2026-01-22 06:53:39.553214557 +0000 UTC m=+1111.695121262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs") pod "openstack-operator-controller-manager-758ddb75c6-rjkvm" (UID: "611fcdc7-1f1f-4530-9f34-68dae9bf4bd5") : secret "metrics-server-cert" not found Jan 22 06:53:24 crc kubenswrapper[4720]: E0122 06:53:24.198620 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 22 06:53:24 crc kubenswrapper[4720]: E0122 06:53:24.199630 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wdtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-wmhbp_openstack-operators(0a6de6f6-4bef-4f84-b4b8-4de46e9347b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:24 crc kubenswrapper[4720]: E0122 06:53:24.201052 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" podUID="0a6de6f6-4bef-4f84-b4b8-4de46e9347b1" Jan 22 06:53:24 crc kubenswrapper[4720]: E0122 06:53:24.431877 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" podUID="f30c0975-10b7-4d3b-98f7-63a02ae44927" Jan 22 06:53:24 crc kubenswrapper[4720]: E0122 06:53:24.432411 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" podUID="0a6de6f6-4bef-4f84-b4b8-4de46e9347b1" Jan 22 06:53:25 crc kubenswrapper[4720]: E0122 06:53:25.189525 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 22 06:53:25 crc kubenswrapper[4720]: E0122 06:53:25.189826 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dcqdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-l7wpl_openstack-operators(b464ce62-6f79-452c-a1c6-3c4878bcc8ba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:25 crc kubenswrapper[4720]: E0122 06:53:25.191230 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" podUID="b464ce62-6f79-452c-a1c6-3c4878bcc8ba" Jan 22 06:53:25 crc kubenswrapper[4720]: E0122 06:53:25.441198 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" podUID="b464ce62-6f79-452c-a1c6-3c4878bcc8ba" Jan 22 06:53:26 crc kubenswrapper[4720]: E0122 06:53:26.035900 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 22 06:53:26 crc kubenswrapper[4720]: E0122 06:53:26.036156 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gf5gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-m2hkw_openstack-operators(6a45a130-7295-401c-a63c-1df68c263764): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:26 crc kubenswrapper[4720]: E0122 06:53:26.038133 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" podUID="6a45a130-7295-401c-a63c-1df68c263764" Jan 22 06:53:26 crc kubenswrapper[4720]: E0122 06:53:26.488805 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" podUID="6a45a130-7295-401c-a63c-1df68c263764" Jan 22 06:53:28 crc kubenswrapper[4720]: E0122 06:53:28.418460 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 22 06:53:28 crc kubenswrapper[4720]: E0122 06:53:28.419605 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbqlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-9jw99_openstack-operators(7d67431b-e376-4558-83f2-af33c36b403b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:28 crc kubenswrapper[4720]: E0122 06:53:28.421029 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" podUID="7d67431b-e376-4558-83f2-af33c36b403b" Jan 22 06:53:28 crc kubenswrapper[4720]: E0122 06:53:28.503565 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" podUID="7d67431b-e376-4558-83f2-af33c36b403b" Jan 22 06:53:31 crc kubenswrapper[4720]: E0122 06:53:31.182715 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 22 06:53:31 crc kubenswrapper[4720]: E0122 06:53:31.183288 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pnnqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-gkhjf_openstack-operators(e77f3a0e-4936-4b98-829b-6ea9ebe6e817): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:31 crc kubenswrapper[4720]: E0122 06:53:31.184609 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" podUID="e77f3a0e-4936-4b98-829b-6ea9ebe6e817" Jan 22 06:53:31 crc kubenswrapper[4720]: E0122 06:53:31.529141 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" podUID="e77f3a0e-4936-4b98-829b-6ea9ebe6e817" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.001090 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.002121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.007563 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25a73ab8-0306-4e57-9417-ce651e370925-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-h6fd5\" (UID: \"25a73ab8-0306-4e57-9417-ce651e370925\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.007658 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"cert\" (UniqueName: \"kubernetes.io/secret/476ecc66-be12-4a68-8de1-3a062ec12f55-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85485jc7\" (UID: \"476ecc66-be12-4a68-8de1-3a062ec12f55\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.030302 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dkbtc" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.038059 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.274818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rhlb6" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.282417 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.611630 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.611777 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.617529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-webhook-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.617867 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/611fcdc7-1f1f-4530-9f34-68dae9bf4bd5-metrics-certs\") pod \"openstack-operator-controller-manager-758ddb75c6-rjkvm\" (UID: \"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5\") " pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.834844 4720 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mprp8" Jan 22 06:53:39 crc kubenswrapper[4720]: I0122 06:53:39.841226 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" Jan 22 06:53:41 crc kubenswrapper[4720]: E0122 06:53:41.761470 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 22 06:53:41 crc kubenswrapper[4720]: E0122 06:53:41.761761 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6jrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-2cs6n_openstack-operators(1a3c6a91-064b-4006-b40f-ba7bc317aa83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:41 crc kubenswrapper[4720]: E0122 06:53:41.762978 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podUID="1a3c6a91-064b-4006-b40f-ba7bc317aa83" Jan 22 06:53:42 crc kubenswrapper[4720]: E0122 06:53:42.358264 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 22 06:53:42 crc kubenswrapper[4720]: E0122 06:53:42.358596 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5bppz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-4tlfl_openstack-operators(0e186e5c-83e6-465d-9353-e9314702d85a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:42 crc kubenswrapper[4720]: E0122 06:53:42.360000 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" podUID="0e186e5c-83e6-465d-9353-e9314702d85a" Jan 22 06:53:42 crc kubenswrapper[4720]: E0122 06:53:42.629748 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" podUID="0e186e5c-83e6-465d-9353-e9314702d85a" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.081086 4720 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.081858 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-d5h9r_openstack-operators(ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.083110 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" podUID="ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.577584 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.577894 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tg8wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-ddkv8_openstack-operators(21ee70f0-2938-4d3a-9edf-beaa943261ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.579134 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" podUID="21ee70f0-2938-4d3a-9edf-beaa943261ab" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.702089 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" podUID="ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1" Jan 22 06:53:43 crc kubenswrapper[4720]: E0122 06:53:43.702089 4720 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" podUID="21ee70f0-2938-4d3a-9edf-beaa943261ab" Jan 22 06:53:45 crc kubenswrapper[4720]: E0122 06:53:45.582867 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 22 06:53:45 crc kubenswrapper[4720]: E0122 06:53:45.583395 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5lq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-nn4jg_openstack-operators(d681304a-06cd-4870-b2b5-4f10936b7775): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:45 crc kubenswrapper[4720]: E0122 06:53:45.584624 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podUID="d681304a-06cd-4870-b2b5-4f10936b7775" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.325070 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.325438 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-546mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-4nvz6_openstack-operators(ff37e0b2-69d6-4217-b44f-a8bf016e45d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.326609 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podUID="ff37e0b2-69d6-4217-b44f-a8bf016e45d6" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.509982 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.510063 4720 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.510246 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tp4rg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-57c994f794-t6ms5_openstack-operators(f88ed309-12b4-4cb4-bc95-6e6873c72c10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:53:46 crc kubenswrapper[4720]: E0122 06:53:46.511364 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.057436 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5"] Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.109307 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm"] Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.117370 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7"]
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.669601 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" event={"ID":"476ecc66-be12-4a68-8de1-3a062ec12f55","Type":"ContainerStarted","Data":"7049e10bc44a9ffb5bce556d5cc5ed25b6f0f41abf57d8455b3e8feb05ba7758"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.671517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" event={"ID":"f30c0975-10b7-4d3b-98f7-63a02ae44927","Type":"ContainerStarted","Data":"2d88ce036eea6cb37eed3000569d1b10187b1daf507c3127f2f075480c022295"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.671772 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.673930 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" event={"ID":"de14bbbe-09fc-4f3c-8857-e3f7abca82f8","Type":"ContainerStarted","Data":"d96d79d31f6866712890d916dbb3157db38666f8c22a8624da09a2c89950b60e"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.674084 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.675283 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" event={"ID":"7d67431b-e376-4558-83f2-af33c36b403b","Type":"ContainerStarted","Data":"c6cce0346a67f49d5cb719fb80bd5d879ae674cbd9923cb977ef2e3de6df766a"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.675457 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.676491 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" event={"ID":"6a45a130-7295-401c-a63c-1df68c263764","Type":"ContainerStarted","Data":"7fbd328b7270c6cff3158e0012d819cc95bd670f8b944e51f6cff4b9b0c8ee19"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.676667 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.678599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" event={"ID":"a2440b28-2217-482c-87c6-443616b586cb","Type":"ContainerStarted","Data":"913c4bdac754595a82109beec35bdafba3ca7485ffeb089fb33954a5d8507d63"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.678732 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.680072 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" event={"ID":"b464ce62-6f79-452c-a1c6-3c4878bcc8ba","Type":"ContainerStarted","Data":"48f8da77bab2a81d0dda50f6a3ddbe9372e9ae3dc966da23ee6346198d67a507"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.680225 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.681442 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" event={"ID":"0a6de6f6-4bef-4f84-b4b8-4de46e9347b1","Type":"ContainerStarted","Data":"d69241be3a60e3b26ce1fc08d72bb1ead0dd01b3cf725317dc44308ff8958f7d"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.681625 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.682909 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq" event={"ID":"cc13fc87-a160-4804-aef4-bb2c6ee89f13","Type":"ContainerStarted","Data":"6ca73984ab10678adfa0b8c4a2bc9bec5aeaf72919f5a3707087f925561ad000"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.683012 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.684559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" event={"ID":"e9c3503d-2a2a-4f59-8c25-b28a681cdcfb","Type":"ContainerStarted","Data":"c53999b7541762ebfec7041edbfc188a4092f6bc69d122e1831c936fe42d995b"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.684711 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.685858 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" event={"ID":"25a73ab8-0306-4e57-9417-ce651e370925","Type":"ContainerStarted","Data":"8341c5e9e01c6e55eb2ac9bd1a1f279ba58e6754e6870278d55e9eb8f945ddb3"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.687391 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" event={"ID":"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5","Type":"ContainerStarted","Data":"07c5b954994014598bcbad4d8a41426dd552982968008f4708b237324bbe8165"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.687415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" event={"ID":"611fcdc7-1f1f-4530-9f34-68dae9bf4bd5","Type":"ContainerStarted","Data":"c2af1475af6b7201cf690862002325ea841619e448e5d97535941f38e9de86b6"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.687607 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.690155 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" event={"ID":"e77f3a0e-4936-4b98-829b-6ea9ebe6e817","Type":"ContainerStarted","Data":"df214168dbe794f6d633695e5b3a3d55a15f416b16c28d18ab49232596e709af"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.690486 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.691864 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9" event={"ID":"a072cd1a-6b0c-4f3c-aa50-12a441bc87e3","Type":"ContainerStarted","Data":"9f84082895f8aea4b9759bddc3c9efe8a0a65746371a47b2523bd54652d5c024"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.692717 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.693824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q" event={"ID":"15bf2b23-40fc-4958-9774-3c6e4f2c591a","Type":"ContainerStarted","Data":"536e653a65090dec8567790accf3fdae1b53ffefe2a91eaa73a7d0936bd08178"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.694023 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.695435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" event={"ID":"fd7a6c01-1255-4f11-9dba-d3119753d47c","Type":"ContainerStarted","Data":"1e85a6de5e470e6d389eb10d31ac791416caf1a58bcc4706ec881d026713e942"}
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.695651 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.733456 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m" podStartSLOduration=4.31856177 podStartE2EDuration="41.733426239s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.095131791 +0000 UTC m=+1081.237038496" lastFinishedPulling="2026-01-22 06:53:46.50999626 +0000 UTC m=+1118.651902965" observedRunningTime="2026-01-22 06:53:47.728120168 +0000 UTC m=+1119.870026873" watchObservedRunningTime="2026-01-22 06:53:47.733426239 +0000 UTC m=+1119.875332944"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.753305 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf" podStartSLOduration=4.944083547 podStartE2EDuration="41.753281493s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.696752449 +0000 UTC m=+1081.838659144" lastFinishedPulling="2026-01-22 06:53:46.505950385 +0000 UTC m=+1118.647857090" observedRunningTime="2026-01-22 06:53:47.751434341 +0000 UTC m=+1119.893341046" watchObservedRunningTime="2026-01-22 06:53:47.753281493 +0000 UTC m=+1119.895188198"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.982198 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw" podStartSLOduration=5.084623758 podStartE2EDuration="41.982170794s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.612678241 +0000 UTC m=+1081.754584946" lastFinishedPulling="2026-01-22 06:53:46.510225277 +0000 UTC m=+1118.652131982" observedRunningTime="2026-01-22 06:53:47.976725879 +0000 UTC m=+1120.118632584" watchObservedRunningTime="2026-01-22 06:53:47.982170794 +0000 UTC m=+1120.124077499"
Jan 22 06:53:47 crc kubenswrapper[4720]: I0122 06:53:47.989183 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl" podStartSLOduration=5.984222029 podStartE2EDuration="41.989153952s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.568100175 +0000 UTC m=+1081.710006880" lastFinishedPulling="2026-01-22 06:53:45.573032098 +0000 UTC m=+1117.714938803" observedRunningTime="2026-01-22 06:53:47.886152427 +0000 UTC m=+1120.028059132" watchObservedRunningTime="2026-01-22 06:53:47.989153952 +0000 UTC m=+1120.131060657"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.036209 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67" podStartSLOduration=6.114890961 podStartE2EDuration="42.036183758s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.651780022 +0000 UTC m=+1081.793686727" lastFinishedPulling="2026-01-22 06:53:45.573072819 +0000 UTC m=+1117.714979524" observedRunningTime="2026-01-22 06:53:48.034064858 +0000 UTC m=+1120.175971563" watchObservedRunningTime="2026-01-22 06:53:48.036183758 +0000 UTC m=+1120.178090463"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.243714 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq" podStartSLOduration=4.271481762 podStartE2EDuration="42.243691562s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:07.600852339 +0000 UTC m=+1079.742759034" lastFinishedPulling="2026-01-22 06:53:45.573062129 +0000 UTC m=+1117.714968834" observedRunningTime="2026-01-22 06:53:48.068405203 +0000 UTC m=+1120.210311908" watchObservedRunningTime="2026-01-22 06:53:48.243691562 +0000 UTC m=+1120.385598267"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.286682 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm" podStartSLOduration=41.286658152 podStartE2EDuration="41.286658152s" podCreationTimestamp="2026-01-22 06:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:53:48.249176258 +0000 UTC m=+1120.391082963" watchObservedRunningTime="2026-01-22 06:53:48.286658152 +0000 UTC m=+1120.428564857"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.287106 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w" podStartSLOduration=6.383012506 podStartE2EDuration="42.287100495s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.669046382 +0000 UTC m=+1081.810953087" lastFinishedPulling="2026-01-22 06:53:45.573134371 +0000 UTC m=+1117.715041076" observedRunningTime="2026-01-22 06:53:48.285345205 +0000 UTC m=+1120.427251920" watchObservedRunningTime="2026-01-22 06:53:48.287100495 +0000 UTC m=+1120.429007200"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.311696 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp" podStartSLOduration=5.467158343 podStartE2EDuration="42.311657172s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.663875845 +0000 UTC m=+1081.805782550" lastFinishedPulling="2026-01-22 06:53:46.508374674 +0000 UTC m=+1118.650281379" observedRunningTime="2026-01-22 06:53:48.307367981 +0000 UTC m=+1120.449274686" watchObservedRunningTime="2026-01-22 06:53:48.311657172 +0000 UTC m=+1120.453563877"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.355580 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q" podStartSLOduration=4.861368827 podStartE2EDuration="42.355549109s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:08.078832185 +0000 UTC m=+1080.220738890" lastFinishedPulling="2026-01-22 06:53:45.573012467 +0000 UTC m=+1117.714919172" observedRunningTime="2026-01-22 06:53:48.352279906 +0000 UTC m=+1120.494186621" watchObservedRunningTime="2026-01-22 06:53:48.355549109 +0000 UTC m=+1120.497455814"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.380856 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9" podStartSLOduration=4.990440093 podStartE2EDuration="42.380828587s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:08.182685445 +0000 UTC m=+1080.324592150" lastFinishedPulling="2026-01-22 06:53:45.573073929 +0000 UTC m=+1117.714980644" observedRunningTime="2026-01-22 06:53:48.379265013 +0000 UTC m=+1120.521171718" watchObservedRunningTime="2026-01-22 06:53:48.380828587 +0000 UTC m=+1120.522735292"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.541215 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99" podStartSLOduration=5.129775872 podStartE2EDuration="42.541183042s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.097678793 +0000 UTC m=+1081.239585499" lastFinishedPulling="2026-01-22 06:53:46.509085964 +0000 UTC m=+1118.650992669" observedRunningTime="2026-01-22 06:53:48.477456752 +0000 UTC m=+1120.619363457" watchObservedRunningTime="2026-01-22 06:53:48.541183042 +0000 UTC m=+1120.683089747"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.541763 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc" podStartSLOduration=6.627306514 podStartE2EDuration="42.541756708s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.658644636 +0000 UTC m=+1081.800551331" lastFinishedPulling="2026-01-22 06:53:45.57309481 +0000 UTC m=+1117.715001525" observedRunningTime="2026-01-22 06:53:48.428298295 +0000 UTC m=+1120.570205000" watchObservedRunningTime="2026-01-22 06:53:48.541756708 +0000 UTC m=+1120.683663413"
Jan 22 06:53:48 crc kubenswrapper[4720]: I0122 06:53:48.611332 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl" podStartSLOduration=4.985960626 podStartE2EDuration="42.611303563s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:08.884775097 +0000 UTC m=+1081.026681802" lastFinishedPulling="2026-01-22 06:53:46.510118024 +0000 UTC m=+1118.652024739" observedRunningTime="2026-01-22 06:53:48.592443498 +0000 UTC m=+1120.734350203" watchObservedRunningTime="2026-01-22 06:53:48.611303563 +0000 UTC m=+1120.753210268"
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.758877 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" event={"ID":"476ecc66-be12-4a68-8de1-3a062ec12f55","Type":"ContainerStarted","Data":"c82bf6bf97badc837b4ea10c56cb17e69515cb23de2d3d23432f5c379ec65354"}
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.760192 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7"
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.762504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" event={"ID":"25a73ab8-0306-4e57-9417-ce651e370925","Type":"ContainerStarted","Data":"6015c403f4079d5712837d51ae2086b016a8d9af721e94d2f4edaf296127f0c8"}
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.762784 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5"
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.800078 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7" podStartSLOduration=42.36126068 podStartE2EDuration="46.800046233s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:47.156618836 +0000 UTC m=+1119.298525541" lastFinishedPulling="2026-01-22 06:53:51.595404369 +0000 UTC m=+1123.737311094" observedRunningTime="2026-01-22 06:53:52.799630802 +0000 UTC m=+1124.941537547" watchObservedRunningTime="2026-01-22 06:53:52.800046233 +0000 UTC m=+1124.941952988"
Jan 22 06:53:52 crc kubenswrapper[4720]: I0122 06:53:52.841832 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5" podStartSLOduration=42.345131841 podStartE2EDuration="46.841798879s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:47.090303322 +0000 UTC m=+1119.232210027" lastFinishedPulling="2026-01-22 06:53:51.58697035 +0000 UTC m=+1123.728877065" observedRunningTime="2026-01-22 06:53:52.833626587 +0000 UTC m=+1124.975533302" watchObservedRunningTime="2026-01-22 06:53:52.841798879 +0000 UTC m=+1124.983705594"
Jan 22 06:53:55 crc kubenswrapper[4720]: E0122 06:53:55.214727 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podUID="1a3c6a91-064b-4006-b40f-ba7bc317aa83"
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.719845 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4nvtq"
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.812849 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" event={"ID":"0e186e5c-83e6-465d-9353-e9314702d85a","Type":"ContainerStarted","Data":"c28243bd398fb97db3b339bacda95ea0af5535f0a711bf4d4987ba094b5b98f5"}
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.813166 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl"
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.838472 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl" podStartSLOduration=4.833564206 podStartE2EDuration="50.838450763s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.70418827 +0000 UTC m=+1081.846094975" lastFinishedPulling="2026-01-22 06:53:55.709074827 +0000 UTC m=+1127.850981532" observedRunningTime="2026-01-22 06:53:56.831887307 +0000 UTC m=+1128.973794022" watchObservedRunningTime="2026-01-22 06:53:56.838450763 +0000 UTC m=+1128.980357468"
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.960858 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-kp5p9"
Jan 22 06:53:56 crc kubenswrapper[4720]: I0122 06:53:56.978503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-g9d9q"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.049138 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-l7wpl"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.087439 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-6rl8m"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.102215 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-9jw99"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.150732 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-hq64w"
Jan 22 06:53:57 crc kubenswrapper[4720]: E0122 06:53:57.213061 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podUID="d681304a-06cd-4870-b2b5-4f10936b7775"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.259614 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-47njc"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.407009 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-wmhbp"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.407730 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-gkhjf"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.407871 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-m2hkw"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.556473 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-xqx67"
Jan 22 06:53:57 crc kubenswrapper[4720]: I0122 06:53:57.625162 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-tnvdl"
Jan 22 06:53:58 crc kubenswrapper[4720]: I0122 06:53:58.836367 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" event={"ID":"21ee70f0-2938-4d3a-9edf-beaa943261ab","Type":"ContainerStarted","Data":"e8c0885392b06a7af5d2110424110cf8f7528c5edff12ebc849abe4e6afab1e3"}
Jan 22 06:53:58 crc kubenswrapper[4720]: I0122 06:53:58.836701 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"
Jan 22 06:53:58 crc kubenswrapper[4720]: I0122 06:53:58.860105 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8" podStartSLOduration=4.738310102 podStartE2EDuration="52.860084104s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.579487048 +0000 UTC m=+1081.721393753" lastFinishedPulling="2026-01-22 06:53:57.70126105 +0000 UTC m=+1129.843167755" observedRunningTime="2026-01-22 06:53:58.856494842 +0000 UTC m=+1130.998401547" watchObservedRunningTime="2026-01-22 06:53:58.860084104 +0000 UTC m=+1131.001990809"
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.049826 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-h6fd5"
Jan 22 06:53:59 crc kubenswrapper[4720]: E0122 06:53:59.212758 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/openstack-k8s-operators/watcher-operator:2bc4688cca96552e6b25883a5eb5cc7a0447d6d9\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10"
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.292200 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85485jc7"
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.848773 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" event={"ID":"ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1","Type":"ContainerStarted","Data":"93199da53d7334607ab760741b732de6ae9765e8fd809cedc9438cb7d9546d1c"}
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.849422 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.850763 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-758ddb75c6-rjkvm"
Jan 22 06:53:59 crc kubenswrapper[4720]: I0122 06:53:59.883707 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r" podStartSLOduration=4.497598854 podStartE2EDuration="53.883672255s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.648941281 +0000 UTC m=+1081.790847986" lastFinishedPulling="2026-01-22 06:53:59.035014612 +0000 UTC m=+1131.176921387" observedRunningTime="2026-01-22 06:53:59.874260298 +0000 UTC m=+1132.016167003" watchObservedRunningTime="2026-01-22 06:53:59.883672255 +0000 UTC m=+1132.025579000"
Jan 22 06:54:01 crc kubenswrapper[4720]: E0122 06:54:01.212889 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podUID="ff37e0b2-69d6-4217-b44f-a8bf016e45d6"
Jan 22 06:54:07 crc kubenswrapper[4720]: I0122 06:54:07.291706 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-d5h9r"
Jan 22 06:54:07 crc kubenswrapper[4720]: I0122 06:54:07.294936 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-ddkv8"
Jan 22 06:54:07 crc kubenswrapper[4720]: I0122 06:54:07.556452 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-4tlfl"
Jan 22 06:54:09 crc kubenswrapper[4720]: I0122 06:54:09.047409 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" event={"ID":"1a3c6a91-064b-4006-b40f-ba7bc317aa83","Type":"ContainerStarted","Data":"2be5e5cb7afaf82182e236b6a171d1434786f69d517fb5962a6d74d3e40e9b25"}
Jan 22 06:54:09 crc kubenswrapper[4720]: I0122 06:54:09.048010 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n"
Jan 22 06:54:09 crc kubenswrapper[4720]: I0122 06:54:09.076739 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n" podStartSLOduration=4.742520992 podStartE2EDuration="1m3.076712292s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.716567511 +0000 UTC m=+1081.858474216" lastFinishedPulling="2026-01-22 06:54:08.050758811 +0000 UTC m=+1140.192665516" observedRunningTime="2026-01-22 06:54:09.066601645 +0000 UTC m=+1141.208508390" watchObservedRunningTime="2026-01-22 06:54:09.076712292 +0000 UTC m=+1141.218619017"
Jan 22 06:54:10 crc kubenswrapper[4720]: I0122 06:54:10.058633 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" event={"ID":"d681304a-06cd-4870-b2b5-4f10936b7775","Type":"ContainerStarted","Data":"9a8ce54ffe0d27da3068de3d81b8b122fd34d33c3229fd11625270be0f5c0f13"}
Jan 22 06:54:10 crc kubenswrapper[4720]: I0122 06:54:10.059622 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"
Jan 22 06:54:10 crc kubenswrapper[4720]: I0122 06:54:10.083087 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg" podStartSLOduration=4.147613185 podStartE2EDuration="1m4.083063565s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.715695857 +0000 UTC m=+1081.857602562" lastFinishedPulling="2026-01-22 06:54:09.651146227 +0000 UTC m=+1141.793052942" observedRunningTime="2026-01-22 06:54:10.079890405 +0000 UTC m=+1142.221797110" watchObservedRunningTime="2026-01-22 06:54:10.083063565 +0000 UTC m=+1142.224970270"
Jan 22 06:54:12 crc kubenswrapper[4720]: I0122 06:54:12.081126 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" event={"ID":"f88ed309-12b4-4cb4-bc95-6e6873c72c10","Type":"ContainerStarted","Data":"f086f534db9143e73aa75d72b44df4c82cbd1532671825a9c9ed5a3c705fc8ea"}
Jan 22 06:54:12 crc kubenswrapper[4720]: I0122 06:54:12.081437 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"
Jan 22 06:54:12 crc kubenswrapper[4720]: I0122 06:54:12.133985 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podStartSLOduration=4.535438149 podStartE2EDuration="1m6.133937365s" podCreationTimestamp="2026-01-22 06:53:06 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.716761796 +0000 UTC m=+1081.858668501" lastFinishedPulling="2026-01-22 06:54:11.315260982 +0000 UTC m=+1143.457167717" observedRunningTime="2026-01-22 06:54:12.114607096 +0000 UTC m=+1144.256513841" watchObservedRunningTime="2026-01-22 06:54:12.133937365 +0000 UTC m=+1144.275844120"
Jan 22 06:54:14 crc kubenswrapper[4720]: I0122 06:54:14.099031 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" event={"ID":"ff37e0b2-69d6-4217-b44f-a8bf016e45d6","Type":"ContainerStarted","Data":"68da5f0bf1444b6aa3a2b258bf6678c448acf3a258fa43bd288298445969f7e5"}
Jan 22 06:54:17 crc kubenswrapper[4720]: I0122 06:54:17.174306 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-nn4jg"
Jan 22 06:54:17 crc kubenswrapper[4720]: I0122 06:54:17.193678 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-4nvz6" podStartSLOduration=7.034311018 podStartE2EDuration="1m10.193648295s" podCreationTimestamp="2026-01-22 06:53:07 +0000 UTC" firstStartedPulling="2026-01-22 06:53:09.704324024 +0000 UTC m=+1081.846230729" lastFinishedPulling="2026-01-22 06:54:12.863661291 +0000 UTC m=+1145.005568006" observedRunningTime="2026-01-22 06:54:14.128501137 +0000 UTC m=+1146.270407862" watchObservedRunningTime="2026-01-22 06:54:17.193648295 +0000 UTC m=+1149.335555010"
Jan 22 06:54:17 crc kubenswrapper[4720]: I0122 06:54:17.592335 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-2cs6n"
Jan 22 06:54:17 crc kubenswrapper[4720]: I0122 06:54:17.944847 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"
Jan 22 06:54:23 crc kubenswrapper[4720]: I0122 06:54:23.938015 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"]
Jan 22 06:54:23 crc kubenswrapper[4720]: I0122 06:54:23.939149 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" containerName="manager" containerID="cri-o://f086f534db9143e73aa75d72b44df4c82cbd1532671825a9c9ed5a3c705fc8ea" gracePeriod=10
Jan 22 06:54:23 crc kubenswrapper[4720]: I0122 06:54:23.993010 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"]
Jan 22 06:54:23 crc kubenswrapper[4720]: I0122 06:54:23.993276 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerName="operator" containerID="cri-o://0630b7100f90bc02361979ca329124c3f5cb8d034d8af4ac6efb5ff5c50988ca" gracePeriod=10
Jan 22 06:54:24 crc kubenswrapper[4720]: I0122 06:54:24.856722 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused"
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.221140 4720 generic.go:334] "Generic (PLEG): container finished" podID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" containerID="f086f534db9143e73aa75d72b44df4c82cbd1532671825a9c9ed5a3c705fc8ea" exitCode=0
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.223080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" event={"ID":"f88ed309-12b4-4cb4-bc95-6e6873c72c10","Type":"ContainerDied","Data":"f086f534db9143e73aa75d72b44df4c82cbd1532671825a9c9ed5a3c705fc8ea"}
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.226378 4720 generic.go:334] "Generic (PLEG): container finished" podID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerID="0630b7100f90bc02361979ca329124c3f5cb8d034d8af4ac6efb5ff5c50988ca" exitCode=0
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.226472 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" event={"ID":"80d040c7-3118-45d5-9f1e-2681a8d116d7","Type":"ContainerDied","Data":"0630b7100f90bc02361979ca329124c3f5cb8d034d8af4ac6efb5ff5c50988ca"}
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.629405 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.633903 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.778486 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzs2b\" (UniqueName: \"kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b\") pod \"80d040c7-3118-45d5-9f1e-2681a8d116d7\" (UID: \"80d040c7-3118-45d5-9f1e-2681a8d116d7\") "
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.778690 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp4rg\" (UniqueName: \"kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg\") pod \"f88ed309-12b4-4cb4-bc95-6e6873c72c10\" (UID: \"f88ed309-12b4-4cb4-bc95-6e6873c72c10\") "
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.791263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg" (OuterVolumeSpecName: "kube-api-access-tp4rg") pod "f88ed309-12b4-4cb4-bc95-6e6873c72c10" (UID: "f88ed309-12b4-4cb4-bc95-6e6873c72c10"). InnerVolumeSpecName "kube-api-access-tp4rg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.791354 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b" (OuterVolumeSpecName: "kube-api-access-tzs2b") pod "80d040c7-3118-45d5-9f1e-2681a8d116d7" (UID: "80d040c7-3118-45d5-9f1e-2681a8d116d7"). InnerVolumeSpecName "kube-api-access-tzs2b".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.880695 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzs2b\" (UniqueName: \"kubernetes.io/projected/80d040c7-3118-45d5-9f1e-2681a8d116d7-kube-api-access-tzs2b\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:26 crc kubenswrapper[4720]: I0122 06:54:26.880734 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp4rg\" (UniqueName: \"kubernetes.io/projected/f88ed309-12b4-4cb4-bc95-6e6873c72c10-kube-api-access-tp4rg\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.238398 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" event={"ID":"f88ed309-12b4-4cb4-bc95-6e6873c72c10","Type":"ContainerDied","Data":"d9442d19e69856b1cf0fef0842b3f32c4021300a8baa2d0f3e56723869da8080"} Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.238464 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5" Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.238475 4720 scope.go:117] "RemoveContainer" containerID="f086f534db9143e73aa75d72b44df4c82cbd1532671825a9c9ed5a3c705fc8ea" Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.241367 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" event={"ID":"80d040c7-3118-45d5-9f1e-2681a8d116d7","Type":"ContainerDied","Data":"2bdb9a41eae3dfee7be190ffab88fa68ac9004d05c469ed40d61c6b5858bfd82"} Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.241487 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s" Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.260376 4720 scope.go:117] "RemoveContainer" containerID="0630b7100f90bc02361979ca329124c3f5cb8d034d8af4ac6efb5ff5c50988ca" Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.309801 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"] Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.320048 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-547d554b65-gvw8s"] Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.328558 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"] Jan 22 06:54:27 crc kubenswrapper[4720]: I0122 06:54:27.335383 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-57c994f794-t6ms5"] Jan 22 06:54:28 crc kubenswrapper[4720]: I0122 06:54:28.223815 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" path="/var/lib/kubelet/pods/80d040c7-3118-45d5-9f1e-2681a8d116d7/volumes" Jan 22 06:54:28 crc kubenswrapper[4720]: I0122 06:54:28.225315 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" path="/var/lib/kubelet/pods/f88ed309-12b4-4cb4-bc95-6e6873c72c10/volumes" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.944663 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:30 crc kubenswrapper[4720]: E0122 06:54:30.945410 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" containerName="manager" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 
06:54:30.945423 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" containerName="manager" Jan 22 06:54:30 crc kubenswrapper[4720]: E0122 06:54:30.945438 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerName="operator" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.945443 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerName="operator" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.945583 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="80d040c7-3118-45d5-9f1e-2681a8d116d7" containerName="operator" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.945595 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f88ed309-12b4-4cb4-bc95-6e6873c72c10" containerName="manager" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.946225 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.951763 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-qhtqj" Jan 22 06:54:30 crc kubenswrapper[4720]: I0122 06:54:30.962532 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:31 crc kubenswrapper[4720]: I0122 06:54:31.150837 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwd6\" (UniqueName: \"kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6\") pod \"watcher-operator-index-m2mt2\" (UID: \"1d8dbe92-b4dc-4e49-89da-b1c83f668ded\") " pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:31 crc kubenswrapper[4720]: I0122 06:54:31.252966 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwd6\" (UniqueName: \"kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6\") pod \"watcher-operator-index-m2mt2\" (UID: \"1d8dbe92-b4dc-4e49-89da-b1c83f668ded\") " pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:31 crc kubenswrapper[4720]: I0122 06:54:31.299333 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwd6\" (UniqueName: \"kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6\") pod \"watcher-operator-index-m2mt2\" (UID: \"1d8dbe92-b4dc-4e49-89da-b1c83f668ded\") " pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:31 crc kubenswrapper[4720]: I0122 06:54:31.564053 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:32 crc kubenswrapper[4720]: W0122 06:54:32.377191 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d8dbe92_b4dc_4e49_89da_b1c83f668ded.slice/crio-e91f000297090a7f5cc43166b3522ea1ba7402570636ef5b2dfaf6d4cf6896b1 WatchSource:0}: Error finding container e91f000297090a7f5cc43166b3522ea1ba7402570636ef5b2dfaf6d4cf6896b1: Status 404 returned error can't find the container with id e91f000297090a7f5cc43166b3522ea1ba7402570636ef5b2dfaf6d4cf6896b1 Jan 22 06:54:32 crc kubenswrapper[4720]: I0122 06:54:32.377546 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:33 crc kubenswrapper[4720]: I0122 06:54:33.311185 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-m2mt2" event={"ID":"1d8dbe92-b4dc-4e49-89da-b1c83f668ded","Type":"ContainerStarted","Data":"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8"} Jan 22 06:54:33 crc kubenswrapper[4720]: I0122 06:54:33.312175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-m2mt2" event={"ID":"1d8dbe92-b4dc-4e49-89da-b1c83f668ded","Type":"ContainerStarted","Data":"e91f000297090a7f5cc43166b3522ea1ba7402570636ef5b2dfaf6d4cf6896b1"} Jan 22 06:54:33 crc kubenswrapper[4720]: I0122 06:54:33.336114 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-m2mt2" podStartSLOduration=3.128630118 podStartE2EDuration="3.33608621s" podCreationTimestamp="2026-01-22 06:54:30 +0000 UTC" firstStartedPulling="2026-01-22 06:54:32.381279721 +0000 UTC m=+1164.523186426" lastFinishedPulling="2026-01-22 06:54:32.588735823 +0000 UTC m=+1164.730642518" observedRunningTime="2026-01-22 06:54:33.329157213 +0000 UTC m=+1165.471063938" 
watchObservedRunningTime="2026-01-22 06:54:33.33608621 +0000 UTC m=+1165.477992915" Jan 22 06:54:35 crc kubenswrapper[4720]: I0122 06:54:35.341315 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:35 crc kubenswrapper[4720]: I0122 06:54:35.344672 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-m2mt2" podUID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" containerName="registry-server" containerID="cri-o://0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8" gracePeriod=2 Jan 22 06:54:35 crc kubenswrapper[4720]: I0122 06:54:35.947199 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-dsskv"] Jan 22 06:54:35 crc kubenswrapper[4720]: I0122 06:54:35.949038 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:35 crc kubenswrapper[4720]: I0122 06:54:35.968819 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-dsskv"] Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.041184 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.141271 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwd6\" (UniqueName: \"kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6\") pod \"1d8dbe92-b4dc-4e49-89da-b1c83f668ded\" (UID: \"1d8dbe92-b4dc-4e49-89da-b1c83f668ded\") " Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.141606 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkb8k\" (UniqueName: \"kubernetes.io/projected/cb206016-4343-44c8-88e0-2f6400068e6d-kube-api-access-jkb8k\") pod \"watcher-operator-index-dsskv\" (UID: \"cb206016-4343-44c8-88e0-2f6400068e6d\") " pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.149269 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6" (OuterVolumeSpecName: "kube-api-access-2hwd6") pod "1d8dbe92-b4dc-4e49-89da-b1c83f668ded" (UID: "1d8dbe92-b4dc-4e49-89da-b1c83f668ded"). InnerVolumeSpecName "kube-api-access-2hwd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.244074 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkb8k\" (UniqueName: \"kubernetes.io/projected/cb206016-4343-44c8-88e0-2f6400068e6d-kube-api-access-jkb8k\") pod \"watcher-operator-index-dsskv\" (UID: \"cb206016-4343-44c8-88e0-2f6400068e6d\") " pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.244274 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwd6\" (UniqueName: \"kubernetes.io/projected/1d8dbe92-b4dc-4e49-89da-b1c83f668ded-kube-api-access-2hwd6\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.263247 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkb8k\" (UniqueName: \"kubernetes.io/projected/cb206016-4343-44c8-88e0-2f6400068e6d-kube-api-access-jkb8k\") pod \"watcher-operator-index-dsskv\" (UID: \"cb206016-4343-44c8-88e0-2f6400068e6d\") " pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.305383 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.352346 4720 generic.go:334] "Generic (PLEG): container finished" podID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" containerID="0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8" exitCode=0 Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.352395 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-m2mt2" event={"ID":"1d8dbe92-b4dc-4e49-89da-b1c83f668ded","Type":"ContainerDied","Data":"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8"} Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.352426 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-m2mt2" event={"ID":"1d8dbe92-b4dc-4e49-89da-b1c83f668ded","Type":"ContainerDied","Data":"e91f000297090a7f5cc43166b3522ea1ba7402570636ef5b2dfaf6d4cf6896b1"} Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.352447 4720 scope.go:117] "RemoveContainer" containerID="0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.352570 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-m2mt2" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.383433 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.384230 4720 scope.go:117] "RemoveContainer" containerID="0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8" Jan 22 06:54:36 crc kubenswrapper[4720]: E0122 06:54:36.385332 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8\": container with ID starting with 0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8 not found: ID does not exist" containerID="0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.385371 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8"} err="failed to get container status \"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8\": rpc error: code = NotFound desc = could not find container \"0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8\": container with ID starting with 0454824263ca62224fb3d37a7e907374ad55c2c0f5e599717335525f619027c8 not found: ID does not exist" Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.397314 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-m2mt2"] Jan 22 06:54:36 crc kubenswrapper[4720]: I0122 06:54:36.803296 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-dsskv"] Jan 22 06:54:36 crc kubenswrapper[4720]: W0122 06:54:36.808247 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb206016_4343_44c8_88e0_2f6400068e6d.slice/crio-15177bfcb29a2a11ccd5e6bdbc5036a5308da957c00b400b29a4cb41a9c367c5 WatchSource:0}: Error finding container 15177bfcb29a2a11ccd5e6bdbc5036a5308da957c00b400b29a4cb41a9c367c5: Status 404 returned error can't find the container with id 15177bfcb29a2a11ccd5e6bdbc5036a5308da957c00b400b29a4cb41a9c367c5 Jan 22 06:54:37 crc kubenswrapper[4720]: I0122 06:54:37.363057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dsskv" event={"ID":"cb206016-4343-44c8-88e0-2f6400068e6d","Type":"ContainerStarted","Data":"2927845a73d26e7bb6d6653ae14562d01fa6721d609aff300a712183e6eacfcc"} Jan 22 06:54:37 crc kubenswrapper[4720]: I0122 06:54:37.363573 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-dsskv" event={"ID":"cb206016-4343-44c8-88e0-2f6400068e6d","Type":"ContainerStarted","Data":"15177bfcb29a2a11ccd5e6bdbc5036a5308da957c00b400b29a4cb41a9c367c5"} Jan 22 06:54:37 crc kubenswrapper[4720]: I0122 06:54:37.403169 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-dsskv" podStartSLOduration=2.342344867 podStartE2EDuration="2.403133362s" podCreationTimestamp="2026-01-22 06:54:35 +0000 UTC" firstStartedPulling="2026-01-22 06:54:36.814201924 +0000 UTC m=+1168.956108629" lastFinishedPulling="2026-01-22 06:54:36.874990409 +0000 UTC m=+1169.016897124" observedRunningTime="2026-01-22 06:54:37.395032309 +0000 UTC m=+1169.536939014" watchObservedRunningTime="2026-01-22 06:54:37.403133362 +0000 UTC m=+1169.545040067" Jan 22 06:54:38 crc kubenswrapper[4720]: I0122 06:54:38.224169 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" path="/var/lib/kubelet/pods/1d8dbe92-b4dc-4e49-89da-b1c83f668ded/volumes" Jan 22 06:54:46 crc kubenswrapper[4720]: I0122 
06:54:46.305947 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:46 crc kubenswrapper[4720]: I0122 06:54:46.307011 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:46 crc kubenswrapper[4720]: I0122 06:54:46.355503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:46 crc kubenswrapper[4720]: I0122 06:54:46.465174 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-dsskv" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.410010 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp"] Jan 22 06:54:47 crc kubenswrapper[4720]: E0122 06:54:47.410865 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" containerName="registry-server" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.410882 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" containerName="registry-server" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.411056 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8dbe92-b4dc-4e49-89da-b1c83f668ded" containerName="registry-server" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.412283 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.415424 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-8bfns" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.417072 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp"] Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.441812 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.442131 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.442191 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d2sx\" (UniqueName: \"kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 
06:54:47.544298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.544473 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.544543 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d2sx\" (UniqueName: \"kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.545872 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.546900 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.572425 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d2sx\" (UniqueName: \"kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx\") pod \"df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:47 crc kubenswrapper[4720]: I0122 06:54:47.746386 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:48 crc kubenswrapper[4720]: I0122 06:54:48.412621 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp"] Jan 22 06:54:48 crc kubenswrapper[4720]: I0122 06:54:48.462417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" event={"ID":"40adb427-e593-415a-a491-fc641e94e5a2","Type":"ContainerStarted","Data":"00ea224c4717e10a33d6062270a7119d215a8ae435c48580957eea22b0b93f89"} Jan 22 06:54:49 crc kubenswrapper[4720]: I0122 06:54:49.474027 4720 generic.go:334] "Generic (PLEG): container finished" podID="40adb427-e593-415a-a491-fc641e94e5a2" containerID="dd95d4c32a664d77b05c72a18acdd36fe3fa055c213b01f1e233cb79dab2eb6f" exitCode=0 Jan 22 06:54:49 crc kubenswrapper[4720]: I0122 06:54:49.474117 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" event={"ID":"40adb427-e593-415a-a491-fc641e94e5a2","Type":"ContainerDied","Data":"dd95d4c32a664d77b05c72a18acdd36fe3fa055c213b01f1e233cb79dab2eb6f"} Jan 22 06:54:50 crc kubenswrapper[4720]: I0122 06:54:50.485757 4720 generic.go:334] "Generic (PLEG): container finished" podID="40adb427-e593-415a-a491-fc641e94e5a2" containerID="2d35ed53eba9529e5ffbe85d104f63d45924bb29e0276b1d86f66e8484559880" exitCode=0 Jan 22 06:54:50 crc kubenswrapper[4720]: I0122 06:54:50.485825 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" event={"ID":"40adb427-e593-415a-a491-fc641e94e5a2","Type":"ContainerDied","Data":"2d35ed53eba9529e5ffbe85d104f63d45924bb29e0276b1d86f66e8484559880"} Jan 22 06:54:51 crc kubenswrapper[4720]: I0122 06:54:51.498213 4720 generic.go:334] "Generic (PLEG): container finished" podID="40adb427-e593-415a-a491-fc641e94e5a2" containerID="1392d07769de981e75c6cdf56eca1af00a42d4ec4ed8e7f1a54b12aad518289e" exitCode=0 Jan 22 06:54:51 crc kubenswrapper[4720]: I0122 06:54:51.498355 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" event={"ID":"40adb427-e593-415a-a491-fc641e94e5a2","Type":"ContainerDied","Data":"1392d07769de981e75c6cdf56eca1af00a42d4ec4ed8e7f1a54b12aad518289e"} Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.846335 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.961362 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util\") pod \"40adb427-e593-415a-a491-fc641e94e5a2\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.961462 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle\") pod \"40adb427-e593-415a-a491-fc641e94e5a2\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.961571 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d2sx\" (UniqueName: \"kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx\") pod \"40adb427-e593-415a-a491-fc641e94e5a2\" (UID: \"40adb427-e593-415a-a491-fc641e94e5a2\") " Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.962551 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle" (OuterVolumeSpecName: "bundle") pod "40adb427-e593-415a-a491-fc641e94e5a2" (UID: "40adb427-e593-415a-a491-fc641e94e5a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.967631 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx" (OuterVolumeSpecName: "kube-api-access-5d2sx") pod "40adb427-e593-415a-a491-fc641e94e5a2" (UID: "40adb427-e593-415a-a491-fc641e94e5a2"). InnerVolumeSpecName "kube-api-access-5d2sx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:54:52 crc kubenswrapper[4720]: I0122 06:54:52.974789 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util" (OuterVolumeSpecName: "util") pod "40adb427-e593-415a-a491-fc641e94e5a2" (UID: "40adb427-e593-415a-a491-fc641e94e5a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.063546 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5d2sx\" (UniqueName: \"kubernetes.io/projected/40adb427-e593-415a-a491-fc641e94e5a2-kube-api-access-5d2sx\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.063639 4720 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-util\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.063700 4720 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/40adb427-e593-415a-a491-fc641e94e5a2-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.517564 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" event={"ID":"40adb427-e593-415a-a491-fc641e94e5a2","Type":"ContainerDied","Data":"00ea224c4717e10a33d6062270a7119d215a8ae435c48580957eea22b0b93f89"} Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.517620 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00ea224c4717e10a33d6062270a7119d215a8ae435c48580957eea22b0b93f89" Jan 22 06:54:53 crc kubenswrapper[4720]: I0122 06:54:53.517653 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.722386 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-db559d697-hjx74"] Jan 22 06:55:05 crc kubenswrapper[4720]: E0122 06:55:05.723281 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="extract" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.723297 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="extract" Jan 22 06:55:05 crc kubenswrapper[4720]: E0122 06:55:05.723306 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="util" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.723312 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="util" Jan 22 06:55:05 crc kubenswrapper[4720]: E0122 06:55:05.723329 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="pull" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.723335 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="pull" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.723474 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="40adb427-e593-415a-a491-fc641e94e5a2" containerName="extract" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.724030 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.726890 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zxvbg" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.727672 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.748061 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-db559d697-hjx74"] Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.869216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9m8\" (UniqueName: \"kubernetes.io/projected/12086a20-e137-4c50-8273-3823f70fbfda-kube-api-access-nw9m8\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.869300 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-webhook-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.869962 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-apiservice-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: 
\"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.971496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-apiservice-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.971609 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9m8\" (UniqueName: \"kubernetes.io/projected/12086a20-e137-4c50-8273-3823f70fbfda-kube-api-access-nw9m8\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.971647 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-webhook-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.982248 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-apiservice-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.993802 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nw9m8\" (UniqueName: \"kubernetes.io/projected/12086a20-e137-4c50-8273-3823f70fbfda-kube-api-access-nw9m8\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:05 crc kubenswrapper[4720]: I0122 06:55:05.995595 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12086a20-e137-4c50-8273-3823f70fbfda-webhook-cert\") pod \"watcher-operator-controller-manager-db559d697-hjx74\" (UID: \"12086a20-e137-4c50-8273-3823f70fbfda\") " pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.043951 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.417470 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-db559d697-hjx74"] Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.643614 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" event={"ID":"12086a20-e137-4c50-8273-3823f70fbfda","Type":"ContainerStarted","Data":"81ab378e13c6b711a3df36a1aaf2b9ff651e942fb3d1de2a04f82d405637d79b"} Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.643961 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" event={"ID":"12086a20-e137-4c50-8273-3823f70fbfda","Type":"ContainerStarted","Data":"c04fea0d7d1e9714d729f453f4f188212a31a549ffd02978830d4cfa52aed296"} Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.643983 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:06 crc kubenswrapper[4720]: I0122 06:55:06.667436 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" podStartSLOduration=1.667414797 podStartE2EDuration="1.667414797s" podCreationTimestamp="2026-01-22 06:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:55:06.663921026 +0000 UTC m=+1198.805827731" watchObservedRunningTime="2026-01-22 06:55:06.667414797 +0000 UTC m=+1198.809321502" Jan 22 06:55:16 crc kubenswrapper[4720]: I0122 06:55:16.049381 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-db559d697-hjx74" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.962309 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.964641 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.967537 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.967782 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.967953 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.967981 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.968224 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.968329 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.968484 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.968574 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.971537 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-zrbx6" Jan 22 06:55:27 crc kubenswrapper[4720]: I0122 06:55:27.986132 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 22 06:55:28 
crc kubenswrapper[4720]: I0122 06:55:28.138162 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sllxz\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-kube-api-access-sllxz\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138246 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3f1c7729-ddca-4f72-84de-835967a553b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f1c7729-ddca-4f72-84de-835967a553b0\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138285 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138309 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138484 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138535 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33c789df-54ca-47c4-9688-74e392e3b121-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138625 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138667 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33c789df-54ca-47c4-9688-74e392e3b121-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.138695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240293 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3f1c7729-ddca-4f72-84de-835967a553b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f1c7729-ddca-4f72-84de-835967a553b0\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240356 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sllxz\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-kube-api-access-sllxz\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240378 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240400 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240454 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240495 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/33c789df-54ca-47c4-9688-74e392e3b121-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240563 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240591 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33c789df-54ca-47c4-9688-74e392e3b121-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.240721 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.242364 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.242364 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-plugins\") 
pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.242897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.243479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.244861 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.244950 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3f1c7729-ddca-4f72-84de-835967a553b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f1c7729-ddca-4f72-84de-835967a553b0\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1110b491a85f9b041ea0dbf6d4d50c53f4513fef5aac6c59c5ea30bd237d562/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.245589 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/33c789df-54ca-47c4-9688-74e392e3b121-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.248244 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/33c789df-54ca-47c4-9688-74e392e3b121-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.248258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.248551 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/33c789df-54ca-47c4-9688-74e392e3b121-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.250041 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.265444 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sllxz\" (UniqueName: \"kubernetes.io/projected/33c789df-54ca-47c4-9688-74e392e3b121-kube-api-access-sllxz\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.278626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3f1c7729-ddca-4f72-84de-835967a553b0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3f1c7729-ddca-4f72-84de-835967a553b0\") pod \"rabbitmq-notifications-server-0\" (UID: \"33c789df-54ca-47c4-9688-74e392e3b121\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.298626 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.427100 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.433856 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.450259 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.454307 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.457081 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-cf2rh" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.458037 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.458603 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.459145 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.459312 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.509684 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552216 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552334 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552418 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5c4f\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-kube-api-access-s5c4f\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552508 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552564 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552598 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-config-data\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552633 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9482dbed-80f4-4d45-9402-5315c0d59310-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9482dbed-80f4-4d45-9402-5315c0d59310-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552693 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552729 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.552775 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654100 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-config-data\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9482dbed-80f4-4d45-9402-5315c0d59310-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654209 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654242 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9482dbed-80f4-4d45-9402-5315c0d59310-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654278 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654320 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654347 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654408 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5c4f\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-kube-api-access-s5c4f\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654436 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.654471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.655418 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-config-data\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.655727 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.655814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.655424 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.656769 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9482dbed-80f4-4d45-9402-5315c0d59310-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.658945 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.658983 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aef3d8da179f2494edf8c924d23503cc217e249009a061552cec1abeccdaa116/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.663043 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.671213 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9482dbed-80f4-4d45-9402-5315c0d59310-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.684537 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5c4f\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-kube-api-access-s5c4f\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.728359 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47956b96-b9ac-4fc8-b042-7169e049bdf8\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.758605 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9482dbed-80f4-4d45-9402-5315c0d59310-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.765429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9482dbed-80f4-4d45-9402-5315c0d59310-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9482dbed-80f4-4d45-9402-5315c0d59310\") " pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.804460 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:55:28 crc kubenswrapper[4720]: I0122 06:55:28.928760 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"]
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.173998 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"]
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.787190 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.787641 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.854828 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.856639 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.857176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"9482dbed-80f4-4d45-9402-5315c0d59310","Type":"ContainerStarted","Data":"58650c26c732f0c8700035ce0bf7e32ce1b9f57f994810a9095b0ca3dbff9a1b"}
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.861144 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.861756 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-4wtxg"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.866486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"33c789df-54ca-47c4-9688-74e392e3b121","Type":"ContainerStarted","Data":"184c8c5fac280b9d39593fb89bd549e977d292206977b29387be27884cccafdc"}
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.866574 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.866845 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.868653 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle"
Jan 22 06:55:29 crc kubenswrapper[4720]: I0122 06:55:29.872189 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.024997 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rckp\" (UniqueName: \"kubernetes.io/projected/7bcd3174-ca47-4882-a14d-1b631d973fcc-kube-api-access-6rckp\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025057 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025084 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025219 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025240 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.025277 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.126953 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127057 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127143 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rckp\" (UniqueName: \"kubernetes.io/projected/7bcd3174-ca47-4882-a14d-1b631d973fcc-kube-api-access-6rckp\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127164 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.127194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.128076 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.128730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-kolla-config\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.128829 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-config-data-default\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.130593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bcd3174-ca47-4882-a14d-1b631d973fcc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.143797 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.145724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bcd3174-ca47-4882-a14d-1b631d973fcc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.146700 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.146729 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2ff073197413d1168af28059464260324520c2c2c7436785ee968777d8d647c8/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.194612 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rckp\" (UniqueName: \"kubernetes.io/projected/7bcd3174-ca47-4882-a14d-1b631d973fcc-kube-api-access-6rckp\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.283806 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.285355 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.291338 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.291592 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.292550 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-gtj4j"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.321429 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.331304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4h4k\" (UniqueName: \"kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.331364 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.331386 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.331425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.331497 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.433221 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7b5dd71b-790a-407f-8a4b-bfff82f0f7c6\") pod \"openstack-galera-0\" (UID: \"7bcd3174-ca47-4882-a14d-1b631d973fcc\") " pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.434137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.434223 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.434297 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4h4k\" (UniqueName: \"kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.434328 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.434349 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.440814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.443989 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.479674 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.480241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.495767 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4h4k\" (UniqueName: \"kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k\") pod \"memcached-0\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.554787 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.621690 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.872645 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.874523 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.881498 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"]
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.881664 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-kgvrt"
Jan 22 06:55:30 crc kubenswrapper[4720]: I0122 06:55:30.945357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2kbx\" (UniqueName: \"kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx\") pod \"kube-state-metrics-0\" (UID: \"5186fc7a-6b08-4177-bb0a-a43da69baa8a\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.069019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kbx\" (UniqueName: \"kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx\") pod \"kube-state-metrics-0\" (UID: \"5186fc7a-6b08-4177-bb0a-a43da69baa8a\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.104810 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kbx\" (UniqueName: \"kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx\") pod \"kube-state-metrics-0\" (UID: \"5186fc7a-6b08-4177-bb0a-a43da69baa8a\") " pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.280927 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.530866 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.687981 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"]
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.698575 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"]
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.702436 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.706342 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.706701 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.708598 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.708669 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.708797 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-6747f"
Jan 22 06:55:31 crc kubenswrapper[4720]: W0122 06:55:31.728080 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bcd3174_ca47_4882_a14d_1b631d973fcc.slice/crio-a7a4d823cfca7e4de50572a0810a86f041aadc8a7456f589d9ed7552574894a8 WatchSource:0}: Error finding container a7a4d823cfca7e4de50572a0810a86f041aadc8a7456f589d9ed7552574894a8: Status 404 returned error can't find the container with id a7a4d823cfca7e4de50572a0810a86f041aadc8a7456f589d9ed7552574894a8
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.746036 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"]
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.805581 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh4vh\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-kube-api-access-jh4vh\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.805643 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.805673 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.806509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.806540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.806570 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.806656 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910313 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh4vh\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-kube-api-access-jh4vh\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910407 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910502 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910534 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-web-config\") pod \"alertmanager-metric-storage-0\" (UID:
\"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.910599 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.912735 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.933133 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.933473 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.934296 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.934320 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.934687 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.937978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0f11b752-39dd-4f60-b6e5-6f788a85f86a","Type":"ContainerStarted","Data":"ccae151106b74c918b750440a199253eb3588be6e722310b96d8d0e7410450ba"} Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.939480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7bcd3174-ca47-4882-a14d-1b631d973fcc","Type":"ContainerStarted","Data":"a7a4d823cfca7e4de50572a0810a86f041aadc8a7456f589d9ed7552574894a8"} Jan 22 06:55:31 crc kubenswrapper[4720]: I0122 06:55:31.964407 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh4vh\" (UniqueName: \"kubernetes.io/projected/98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c-kube-api-access-jh4vh\") pod \"alertmanager-metric-storage-0\" (UID: \"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.063641 4720 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.245260 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.246984 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.248697 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.249340 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.257725 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.258784 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.259080 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.273134 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.273456 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.273758 4720 reflector.go:368] Caches populated for *v1.Secret from 
object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.273945 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.274087 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.274290 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-7rcnf" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.274425 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.274562 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-7592l" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.284205 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.325120 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.326106 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2nw\" (UniqueName: \"kubernetes.io/projected/976fdae9-9e7d-46d1-b649-c0cfecd372ae-kube-api-access-mc2nw\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.326145 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440099 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440134 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440167 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440199 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7hv8\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440271 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc2nw\" (UniqueName: \"kubernetes.io/projected/976fdae9-9e7d-46d1-b649-c0cfecd372ae-kube-api-access-mc2nw\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440291 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440334 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440392 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440489 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.440513 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: E0122 06:55:32.441078 4720 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 22 06:55:32 crc kubenswrapper[4720]: E0122 06:55:32.441133 4720 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert podName:976fdae9-9e7d-46d1-b649-c0cfecd372ae nodeName:}" failed. No retries permitted until 2026-01-22 06:55:32.941116173 +0000 UTC m=+1225.083022878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert") pod "observability-ui-dashboards-66cbf594b5-gqdw7" (UID: "976fdae9-9e7d-46d1-b649-c0cfecd372ae") : secret "observability-ui-dashboards" not found Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.521369 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc2nw\" (UniqueName: \"kubernetes.io/projected/976fdae9-9e7d-46d1-b649-c0cfecd372ae-kube-api-access-mc2nw\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.542683 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543209 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543263 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543289 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543324 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543350 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543374 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: 
I0122 06:55:32.543395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543420 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7hv8\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.543465 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.556084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.556858 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.557295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.561780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.563859 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.570201 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.573988 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out\") pod \"prometheus-metric-storage-0\" 
(UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.595102 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7hv8\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.596094 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.681481 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.681574 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e74ba53cc22a17779c8ca9de275ce3db1c5e72fbfc84ee06e990819ef29e35bd/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.718347 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-547f7d9556-p96k6"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.719433 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.732597 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-547f7d9556-p96k6"] Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.836798 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.889935 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-oauth-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-oauth-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890127 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fdl\" (UniqueName: \"kubernetes.io/projected/5781b5c7-a7af-4c95-8e04-c407dc116dbe-kube-api-access-27fdl\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890171 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890268 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-trusted-ca-bundle\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.890323 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-service-ca\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.933483 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.982300 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"5186fc7a-6b08-4177-bb0a-a43da69baa8a","Type":"ContainerStarted","Data":"b8cb5d9aebf6827568db5f6cd635172184dc4fa7dca0e0022068248bf963f895"} Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993126 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993274 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-trusted-ca-bundle\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-service-ca\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 
06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993339 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993398 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-oauth-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993462 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-oauth-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:32 crc kubenswrapper[4720]: I0122 06:55:32.993486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27fdl\" (UniqueName: \"kubernetes.io/projected/5781b5c7-a7af-4c95-8e04-c407dc116dbe-kube-api-access-27fdl\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:32.995593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc 
kubenswrapper[4720]: I0122 06:55:32.996008 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-trusted-ca-bundle\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:32.996614 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-service-ca\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.008834 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5781b5c7-a7af-4c95-8e04-c407dc116dbe-oauth-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.010902 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-oauth-config\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.012165 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976fdae9-9e7d-46d1-b649-c0cfecd372ae-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-gqdw7\" (UID: \"976fdae9-9e7d-46d1-b649-c0cfecd372ae\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.022850 
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5781b5c7-a7af-4c95-8e04-c407dc116dbe-console-serving-cert\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.036758 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27fdl\" (UniqueName: \"kubernetes.io/projected/5781b5c7-a7af-4c95-8e04-c407dc116dbe-kube-api-access-27fdl\") pod \"console-547f7d9556-p96k6\" (UID: \"5781b5c7-a7af-4c95-8e04-c407dc116dbe\") " pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.077676 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.169973 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 22 06:55:33 crc kubenswrapper[4720]: W0122 06:55:33.277664 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98e9f2ee_4cf2_414f_85c4_3dc1e3023a7c.slice/crio-6d3d3c1438db48b79d60c349b445aa2270905bb13f70598252fe25df7727164e WatchSource:0}: Error finding container 6d3d3c1438db48b79d60c349b445aa2270905bb13f70598252fe25df7727164e: Status 404 returned error can't find the container with id 6d3d3c1438db48b79d60c349b445aa2270905bb13f70598252fe25df7727164e Jan 22 06:55:33 crc kubenswrapper[4720]: I0122 06:55:33.308479 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" Jan 22 06:55:34 crc kubenswrapper[4720]: I0122 06:55:34.108023 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c","Type":"ContainerStarted","Data":"6d3d3c1438db48b79d60c349b445aa2270905bb13f70598252fe25df7727164e"} Jan 22 06:55:34 crc kubenswrapper[4720]: I0122 06:55:34.235054 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 06:55:34 crc kubenswrapper[4720]: I0122 06:55:34.485188 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7"] Jan 22 06:55:34 crc kubenswrapper[4720]: I0122 06:55:34.509082 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-547f7d9556-p96k6"] Jan 22 06:55:35 crc kubenswrapper[4720]: W0122 06:55:35.593817 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod976fdae9_9e7d_46d1_b649_c0cfecd372ae.slice/crio-1df0883453b498c4ffc02e1d4ac9e4cab7b55fa1a6ff20353cde8e0ff7b7d146 WatchSource:0}: Error finding container 1df0883453b498c4ffc02e1d4ac9e4cab7b55fa1a6ff20353cde8e0ff7b7d146: Status 404 returned error can't find the container with id 1df0883453b498c4ffc02e1d4ac9e4cab7b55fa1a6ff20353cde8e0ff7b7d146 Jan 22 06:55:36 crc kubenswrapper[4720]: I0122 06:55:36.147734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" event={"ID":"976fdae9-9e7d-46d1-b649-c0cfecd372ae","Type":"ContainerStarted","Data":"1df0883453b498c4ffc02e1d4ac9e4cab7b55fa1a6ff20353cde8e0ff7b7d146"} Jan 22 06:55:36 crc kubenswrapper[4720]: I0122 06:55:36.155979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerStarted","Data":"31515a033e305f943cab331442dbd93aa3b397e4d55cd092469400e083a6d68b"} Jan 22 06:55:36 crc kubenswrapper[4720]: I0122 06:55:36.157705 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-547f7d9556-p96k6" event={"ID":"5781b5c7-a7af-4c95-8e04-c407dc116dbe","Type":"ContainerStarted","Data":"6d12987b42c978aca2d28d473355b78ed3f4b0f8c2c5e4989616530f9061fb92"} Jan 22 06:55:55 crc kubenswrapper[4720]: E0122 06:55:55.037419 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 22 06:55:55 crc kubenswrapper[4720]: E0122 06:55:55.038254 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rckp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-galera-0_watcher-kuttl-default(7bcd3174-ca47-4882-a14d-1b631d973fcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:55:55 crc kubenswrapper[4720]: E0122 06:55:55.039450 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="7bcd3174-ca47-4882-a14d-1b631d973fcc" Jan 22 06:55:55 crc kubenswrapper[4720]: E0122 06:55:55.472270 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="7bcd3174-ca47-4882-a14d-1b631d973fcc" Jan 22 06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.001334 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Jan 22 06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.001543 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt 
-key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mc2nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-gqdw7_openshift-operators(976fdae9-9e7d-46d1-b649-c0cfecd372ae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.002770 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" podUID="976fdae9-9e7d-46d1-b649-c0cfecd372ae" Jan 22 06:55:56 crc 
kubenswrapper[4720]: E0122 06:55:56.103163 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 22 06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.103451 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sllxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(33c789df-54ca-47c4-9688-74e392e3b121): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 
06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.104591 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="33c789df-54ca-47c4-9688-74e392e3b121" Jan 22 06:55:56 crc kubenswrapper[4720]: I0122 06:55:56.480566 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-547f7d9556-p96k6" event={"ID":"5781b5c7-a7af-4c95-8e04-c407dc116dbe","Type":"ContainerStarted","Data":"8e55f09c5f7d428c43ac42beb2512a737e5cf68de1064d7ffe3661b3e69e90eb"} Jan 22 06:55:56 crc kubenswrapper[4720]: E0122 06:55:56.485510 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" podUID="976fdae9-9e7d-46d1-b649-c0cfecd372ae" Jan 22 06:55:56 crc kubenswrapper[4720]: I0122 06:55:56.574115 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-547f7d9556-p96k6" podStartSLOduration=24.574076027 podStartE2EDuration="24.574076027s" podCreationTimestamp="2026-01-22 06:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:55:56.565622435 +0000 UTC m=+1248.707529140" watchObservedRunningTime="2026-01-22 06:55:56.574076027 +0000 UTC m=+1248.715982742" Jan 22 06:55:57 crc kubenswrapper[4720]: E0122 06:55:57.009128 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 06:55:57 crc kubenswrapper[4720]: E0122 06:55:57.009212 4720 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 22 06:55:57 crc kubenswrapper[4720]: E0122 06:55:57.009537 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=watcher-kuttl-default],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2kbx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_watcher-kuttl-default(5186fc7a-6b08-4177-bb0a-a43da69baa8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 22 06:55:57 crc kubenswrapper[4720]: E0122 06:55:57.013240 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" Jan 22 06:55:57 crc kubenswrapper[4720]: I0122 06:55:57.490931 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0f11b752-39dd-4f60-b6e5-6f788a85f86a","Type":"ContainerStarted","Data":"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"} Jan 22 06:55:57 crc kubenswrapper[4720]: I0122 06:55:57.491321 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 22 06:55:57 
crc kubenswrapper[4720]: E0122 06:55:57.495205 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" Jan 22 06:55:57 crc kubenswrapper[4720]: I0122 06:55:57.517261 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.091249519 podStartE2EDuration="27.517235974s" podCreationTimestamp="2026-01-22 06:55:30 +0000 UTC" firstStartedPulling="2026-01-22 06:55:31.574990138 +0000 UTC m=+1223.716896843" lastFinishedPulling="2026-01-22 06:55:57.000976593 +0000 UTC m=+1249.142883298" observedRunningTime="2026-01-22 06:55:57.513714023 +0000 UTC m=+1249.655620748" watchObservedRunningTime="2026-01-22 06:55:57.517235974 +0000 UTC m=+1249.659142679" Jan 22 06:55:59 crc kubenswrapper[4720]: I0122 06:55:59.506611 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"33c789df-54ca-47c4-9688-74e392e3b121","Type":"ContainerStarted","Data":"ab7934967d70853addb696b47aaf6554e1a226242549dd727265e9aad88934b7"} Jan 22 06:55:59 crc kubenswrapper[4720]: I0122 06:55:59.508788 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"9482dbed-80f4-4d45-9402-5315c0d59310","Type":"ContainerStarted","Data":"fa60b99512b606e5ea50300ed6a9af3af3c99751897855266596b936479d949a"} Jan 22 06:55:59 crc kubenswrapper[4720]: I0122 06:55:59.780886 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 22 06:55:59 crc kubenswrapper[4720]: I0122 06:55:59.781297 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:56:00 crc kubenswrapper[4720]: I0122 06:56:00.519418 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerStarted","Data":"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"} Jan 22 06:56:00 crc kubenswrapper[4720]: I0122 06:56:00.522583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c","Type":"ContainerStarted","Data":"7580469d0280202c19b7981935cd31180cc0951b4d346bfc82c5a2ac3e5a750d"} Jan 22 06:56:03 crc kubenswrapper[4720]: I0122 06:56:03.079107 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:56:03 crc kubenswrapper[4720]: I0122 06:56:03.080324 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:56:03 crc kubenswrapper[4720]: I0122 06:56:03.083981 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:56:03 crc kubenswrapper[4720]: I0122 06:56:03.549130 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-547f7d9556-p96k6" Jan 22 06:56:03 crc kubenswrapper[4720]: I0122 06:56:03.616670 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"] Jan 22 06:56:05 crc 
kubenswrapper[4720]: I0122 06:56:05.623527 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 22 06:56:07 crc kubenswrapper[4720]: I0122 06:56:07.576986 4720 generic.go:334] "Generic (PLEG): container finished" podID="4d97711e-2650-4d76-b960-21e698d8e10a" containerID="430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53" exitCode=0 Jan 22 06:56:07 crc kubenswrapper[4720]: I0122 06:56:07.577090 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerDied","Data":"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"} Jan 22 06:56:08 crc kubenswrapper[4720]: I0122 06:56:08.586942 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7bcd3174-ca47-4882-a14d-1b631d973fcc","Type":"ContainerStarted","Data":"09c6b1d6bbea5d40f6f981e05201dbdae1c801a21ff6b2e6ddbf836068932d17"} Jan 22 06:56:08 crc kubenswrapper[4720]: I0122 06:56:08.590460 4720 generic.go:334] "Generic (PLEG): container finished" podID="98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c" containerID="7580469d0280202c19b7981935cd31180cc0951b4d346bfc82c5a2ac3e5a750d" exitCode=0 Jan 22 06:56:08 crc kubenswrapper[4720]: I0122 06:56:08.590515 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c","Type":"ContainerDied","Data":"7580469d0280202c19b7981935cd31180cc0951b4d346bfc82c5a2ac3e5a750d"} Jan 22 06:56:09 crc kubenswrapper[4720]: I0122 06:56:09.601322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" event={"ID":"976fdae9-9e7d-46d1-b649-c0cfecd372ae","Type":"ContainerStarted","Data":"0e2afee9986176ff0594b18b9b1d3100c671d7cf01c2aec2af522906bac680d1"} Jan 22 06:56:09 crc 
kubenswrapper[4720]: I0122 06:56:09.623875 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-gqdw7" podStartSLOduration=4.148752343 podStartE2EDuration="37.623854803s" podCreationTimestamp="2026-01-22 06:55:32 +0000 UTC" firstStartedPulling="2026-01-22 06:55:35.597896369 +0000 UTC m=+1227.739803074" lastFinishedPulling="2026-01-22 06:56:09.072998829 +0000 UTC m=+1261.214905534" observedRunningTime="2026-01-22 06:56:09.616496092 +0000 UTC m=+1261.758402797" watchObservedRunningTime="2026-01-22 06:56:09.623854803 +0000 UTC m=+1261.765761508"
Jan 22 06:56:11 crc kubenswrapper[4720]: I0122 06:56:11.619612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c","Type":"ContainerStarted","Data":"14ef6cdba1d6992351ee0b2abc0a0dd8b9f59b481f78084bc858c3c06b82b916"}
Jan 22 06:56:11 crc kubenswrapper[4720]: I0122 06:56:11.621768 4720 generic.go:334] "Generic (PLEG): container finished" podID="7bcd3174-ca47-4882-a14d-1b631d973fcc" containerID="09c6b1d6bbea5d40f6f981e05201dbdae1c801a21ff6b2e6ddbf836068932d17" exitCode=0
Jan 22 06:56:11 crc kubenswrapper[4720]: I0122 06:56:11.621832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7bcd3174-ca47-4882-a14d-1b631d973fcc","Type":"ContainerDied","Data":"09c6b1d6bbea5d40f6f981e05201dbdae1c801a21ff6b2e6ddbf836068932d17"}
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.883882 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"7bcd3174-ca47-4882-a14d-1b631d973fcc","Type":"ContainerStarted","Data":"03b32b0a6f68a03181f8ba26fd26007091cdeb2974d12bc000f7b702af9726ea"}
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.898767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"5186fc7a-6b08-4177-bb0a-a43da69baa8a","Type":"ContainerStarted","Data":"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b"}
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.899547 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.904876 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerStarted","Data":"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"}
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.911649 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=10.975398241 podStartE2EDuration="46.911614226s" podCreationTimestamp="2026-01-22 06:55:28 +0000 UTC" firstStartedPulling="2026-01-22 06:55:31.76385699 +0000 UTC m=+1223.905763695" lastFinishedPulling="2026-01-22 06:56:07.700072975 +0000 UTC m=+1259.841979680" observedRunningTime="2026-01-22 06:56:14.908884468 +0000 UTC m=+1267.050791193" watchObservedRunningTime="2026-01-22 06:56:14.911614226 +0000 UTC m=+1267.053520941"
Jan 22 06:56:14 crc kubenswrapper[4720]: I0122 06:56:14.949495 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.724328985 podStartE2EDuration="44.949457443s" podCreationTimestamp="2026-01-22 06:55:30 +0000 UTC" firstStartedPulling="2026-01-22 06:55:32.331544708 +0000 UTC m=+1224.473451413" lastFinishedPulling="2026-01-22 06:56:14.556673166 +0000 UTC m=+1266.698579871" observedRunningTime="2026-01-22 06:56:14.939056744 +0000 UTC m=+1267.080963459" watchObservedRunningTime="2026-01-22 06:56:14.949457443 +0000 UTC m=+1267.091364148"
Jan 22 06:56:16 crc kubenswrapper[4720]: I0122 06:56:16.936246 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c","Type":"ContainerStarted","Data":"edc40a2b2f1ecec9f38ee1b3e67ab5b9b7ea05adf72fca082ff0521e9cf8d252"}
Jan 22 06:56:16 crc kubenswrapper[4720]: I0122 06:56:16.936654 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:56:16 crc kubenswrapper[4720]: I0122 06:56:16.941103 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0"
Jan 22 06:56:16 crc kubenswrapper[4720]: I0122 06:56:16.963562 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=8.121102192 podStartE2EDuration="45.963516382s" podCreationTimestamp="2026-01-22 06:55:31 +0000 UTC" firstStartedPulling="2026-01-22 06:55:33.298703192 +0000 UTC m=+1225.440609897" lastFinishedPulling="2026-01-22 06:56:11.141117382 +0000 UTC m=+1263.283024087" observedRunningTime="2026-01-22 06:56:16.96203357 +0000 UTC m=+1269.103940285" watchObservedRunningTime="2026-01-22 06:56:16.963516382 +0000 UTC m=+1269.105423097"
Jan 22 06:56:17 crc kubenswrapper[4720]: I0122 06:56:17.945765 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerStarted","Data":"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"}
Jan 22 06:56:20 crc kubenswrapper[4720]: I0122 06:56:20.555107 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:56:20 crc kubenswrapper[4720]: I0122 06:56:20.555430 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:56:21 crc kubenswrapper[4720]: I0122 06:56:21.292605 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0"
Jan 22 06:56:26 crc kubenswrapper[4720]: I0122 06:56:26.713971 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:56:26 crc kubenswrapper[4720]: I0122 06:56:26.804738 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0"
Jan 22 06:56:27 crc kubenswrapper[4720]: I0122 06:56:27.023289 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerStarted","Data":"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"}
Jan 22 06:56:27 crc kubenswrapper[4720]: I0122 06:56:27.046628 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=5.265924766 podStartE2EDuration="56.046607649s" podCreationTimestamp="2026-01-22 06:55:31 +0000 UTC" firstStartedPulling="2026-01-22 06:55:35.403150078 +0000 UTC m=+1227.545056773" lastFinishedPulling="2026-01-22 06:56:26.183832911 +0000 UTC m=+1278.325739656" observedRunningTime="2026-01-22 06:56:27.042513321 +0000 UTC m=+1279.184420026" watchObservedRunningTime="2026-01-22 06:56:27.046607649 +0000 UTC m=+1279.188514354"
Jan 22 06:56:27 crc kubenswrapper[4720]: I0122 06:56:27.934806 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:28 crc kubenswrapper[4720]: I0122 06:56:28.667679 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-795669dc4d-wqfdz" podUID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" containerName="console" containerID="cri-o://e9f398eb7668d1f4f733099ad6defc58c2c75458c49654bd9caad13e636638fd" gracePeriod=15
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.023239 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-96mbz"]
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.024842 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.027256 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.047710 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795669dc4d-wqfdz_5fe824a8-1f65-47bd-af5b-88f2cc67c738/console/0.log"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.047759 4720 generic.go:334] "Generic (PLEG): container finished" podID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" containerID="e9f398eb7668d1f4f733099ad6defc58c2c75458c49654bd9caad13e636638fd" exitCode=2
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.047975 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795669dc4d-wqfdz" event={"ID":"5fe824a8-1f65-47bd-af5b-88f2cc67c738","Type":"ContainerDied","Data":"e9f398eb7668d1f4f733099ad6defc58c2c75458c49654bd9caad13e636638fd"}
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.055685 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-96mbz"]
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.067060 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.067268 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c78w2\" (UniqueName: \"kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.168787 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c78w2\" (UniqueName: \"kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.168874 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.169780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.179402 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795669dc4d-wqfdz_5fe824a8-1f65-47bd-af5b-88f2cc67c738/console/0.log"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.179492 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.199120 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c78w2\" (UniqueName: \"kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2\") pod \"root-account-create-update-96mbz\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") " pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.269786 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.269869 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270044 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh5c\" (UniqueName: \"kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270095 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270135 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270195 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270209 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config\") pod \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\" (UID: \"5fe824a8-1f65-47bd-af5b-88f2cc67c738\") "
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.270948 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca" (OuterVolumeSpecName: "service-ca") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.271006 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.271264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config" (OuterVolumeSpecName: "console-config") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.271632 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.273153 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.274921 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c" (OuterVolumeSpecName: "kube-api-access-frh5c") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "kube-api-access-frh5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.280735 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5fe824a8-1f65-47bd-af5b-88f2cc67c738" (UID: "5fe824a8-1f65-47bd-af5b-88f2cc67c738"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.352204 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372160 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh5c\" (UniqueName: \"kubernetes.io/projected/5fe824a8-1f65-47bd-af5b-88f2cc67c738-kube-api-access-frh5c\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372199 4720 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372214 4720 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372227 4720 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-service-ca\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372242 4720 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372255 4720 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.372267 4720 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5fe824a8-1f65-47bd-af5b-88f2cc67c738-console-config\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.780581 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.781130 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.781201 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.781840 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.781936 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677" gracePeriod=600
Jan 22 06:56:29 crc kubenswrapper[4720]: I0122 06:56:29.879651 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-96mbz"]
Jan 22 06:56:29 crc kubenswrapper[4720]: W0122 06:56:29.891755 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1642b8a_36b1_4482_bb8e_f289886d7d82.slice/crio-399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4 WatchSource:0}: Error finding container 399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4: Status 404 returned error can't find the container with id 399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.060348 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795669dc4d-wqfdz_5fe824a8-1f65-47bd-af5b-88f2cc67c738/console/0.log"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.060485 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795669dc4d-wqfdz"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.060931 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795669dc4d-wqfdz" event={"ID":"5fe824a8-1f65-47bd-af5b-88f2cc67c738","Type":"ContainerDied","Data":"48f1e6b5c8e775221749369cbcc3b0036106be8923c6357fa39021085b6ba8c9"}
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.062187 4720 scope.go:117] "RemoveContainer" containerID="e9f398eb7668d1f4f733099ad6defc58c2c75458c49654bd9caad13e636638fd"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.072483 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677" exitCode=0
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.072603 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677"}
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.078927 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-96mbz" event={"ID":"f1642b8a-36b1-4482-bb8e-f289886d7d82","Type":"ContainerStarted","Data":"399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4"}
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.101114 4720 scope.go:117] "RemoveContainer" containerID="b414bde178e4b56f6099e1ff683f7636b4d4b7f1bac281d62264b75dc74b4bc6"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.109058 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"]
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.121663 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-795669dc4d-wqfdz"]
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.138317 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-w6jkf"]
Jan 22 06:56:30 crc kubenswrapper[4720]: E0122 06:56:30.138729 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" containerName="console"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.138747 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" containerName="console"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.138947 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" containerName="console"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.139584 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.150558 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-w6jkf"]
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.230557 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe824a8-1f65-47bd-af5b-88f2cc67c738" path="/var/lib/kubelet/pods/5fe824a8-1f65-47bd-af5b-88f2cc67c738/volumes"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.275961 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"]
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.277101 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.283285 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.290693 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.290752 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdmjv\" (UniqueName: \"kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.290813 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8fsr\" (UniqueName: \"kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.290857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.294409 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"]
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.393194 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdmjv\" (UniqueName: \"kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.393304 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8fsr\" (UniqueName: \"kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.393369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.393444 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.395166 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.395831 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.467523 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdmjv\" (UniqueName: \"kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv\") pod \"keystone-2e07-account-create-update-fjdrg\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.606484 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"
Jan 22 06:56:30 crc kubenswrapper[4720]: I0122 06:56:30.803104 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8fsr\" (UniqueName: \"kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr\") pod \"keystone-db-create-w6jkf\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.075379 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-w6jkf"
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.091424 4720 generic.go:334] "Generic (PLEG): container finished" podID="33c789df-54ca-47c4-9688-74e392e3b121" containerID="ab7934967d70853addb696b47aaf6554e1a226242549dd727265e9aad88934b7" exitCode=0
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.091515 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"33c789df-54ca-47c4-9688-74e392e3b121","Type":"ContainerDied","Data":"ab7934967d70853addb696b47aaf6554e1a226242549dd727265e9aad88934b7"}
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.096216 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe"}
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.101902 4720 generic.go:334] "Generic (PLEG): container finished" podID="f1642b8a-36b1-4482-bb8e-f289886d7d82" containerID="2c1b6ee7e31eb2b1690292c1e0b2dbc767f64df8f199f24080d8f5b2be353c7b" exitCode=0
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.102081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-96mbz" event={"ID":"f1642b8a-36b1-4482-bb8e-f289886d7d82","Type":"ContainerDied","Data":"2c1b6ee7e31eb2b1690292c1e0b2dbc767f64df8f199f24080d8f5b2be353c7b"}
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.284438 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"]
Jan 22 06:56:31 crc kubenswrapper[4720]: I0122 06:56:31.558334 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-w6jkf"]
Jan 22 06:56:31 crc kubenswrapper[4720]: W0122 06:56:31.563562 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bf728f2_33bb_4f7c_b2a1_55e4cfd402e3.slice/crio-b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00 WatchSource:0}: Error finding container b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00: Status 404 returned error can't find the container with id b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.113948 4720 generic.go:334] "Generic (PLEG): container finished" podID="9482dbed-80f4-4d45-9402-5315c0d59310" containerID="fa60b99512b606e5ea50300ed6a9af3af3c99751897855266596b936479d949a" exitCode=0
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.114039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"9482dbed-80f4-4d45-9402-5315c0d59310","Type":"ContainerDied","Data":"fa60b99512b606e5ea50300ed6a9af3af3c99751897855266596b936479d949a"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.118709 4720 generic.go:334] "Generic (PLEG): container finished" podID="d421e895-4cb2-4a95-9a5b-ebf16f934a57" containerID="1ac7dfbb6385fb7241272ac912ea7a32567a7470cb7545fd2c0ef99601e814c7" exitCode=0
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.118957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg" event={"ID":"d421e895-4cb2-4a95-9a5b-ebf16f934a57","Type":"ContainerDied","Data":"1ac7dfbb6385fb7241272ac912ea7a32567a7470cb7545fd2c0ef99601e814c7"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.118991 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg" event={"ID":"d421e895-4cb2-4a95-9a5b-ebf16f934a57","Type":"ContainerStarted","Data":"ac35f699602e596a1168fd5644a1ef2d90e134eb48ed148946b1e6e3f3cc6b4e"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.122645 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"33c789df-54ca-47c4-9688-74e392e3b121","Type":"ContainerStarted","Data":"ccf2445dd8dba0ef1a548e03be657002c6c13acfa0584799a18e52cb6333e3ad"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.123020 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.125706 4720 generic.go:334] "Generic (PLEG): container finished" podID="6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" containerID="f3da2e2411b61d614b724da1441be5c003d7e5bfc77346bccb29187cb3fda5cb" exitCode=0
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.126651 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-w6jkf" event={"ID":"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3","Type":"ContainerDied","Data":"f3da2e2411b61d614b724da1441be5c003d7e5bfc77346bccb29187cb3fda5cb"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.126683 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-w6jkf" event={"ID":"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3","Type":"ContainerStarted","Data":"b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00"}
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.228646 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371970.626148 podStartE2EDuration="1m6.228627246s" podCreationTimestamp="2026-01-22 06:55:26 +0000 UTC" firstStartedPulling="2026-01-22 06:55:28.950442152 +0000 UTC m=+1221.092348857" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:56:32.222630993 +0000 UTC m=+1284.364537718" watchObservedRunningTime="2026-01-22 06:56:32.228627246 +0000 UTC m=+1284.370533951"
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.497744 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-96mbz"
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.629996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c78w2\" (UniqueName: \"kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2\") pod \"f1642b8a-36b1-4482-bb8e-f289886d7d82\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") "
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.630086 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts\") pod \"f1642b8a-36b1-4482-bb8e-f289886d7d82\" (UID: \"f1642b8a-36b1-4482-bb8e-f289886d7d82\") "
Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.631081 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"f1642b8a-36b1-4482-bb8e-f289886d7d82" (UID: "f1642b8a-36b1-4482-bb8e-f289886d7d82"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.636110 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2" (OuterVolumeSpecName: "kube-api-access-c78w2") pod "f1642b8a-36b1-4482-bb8e-f289886d7d82" (UID: "f1642b8a-36b1-4482-bb8e-f289886d7d82"). InnerVolumeSpecName "kube-api-access-c78w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.732262 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1642b8a-36b1-4482-bb8e-f289886d7d82-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.732303 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c78w2\" (UniqueName: \"kubernetes.io/projected/f1642b8a-36b1-4482-bb8e-f289886d7d82-kube-api-access-c78w2\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.935872 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:56:32 crc kubenswrapper[4720]: I0122 06:56:32.939134 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.136835 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"9482dbed-80f4-4d45-9402-5315c0d59310","Type":"ContainerStarted","Data":"3f4be2396261a25df3be81c306f86f35bf061eb57859ea9e450e507cd02a4292"} Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.137202 4720 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.139483 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-96mbz" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.139531 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-96mbz" event={"ID":"f1642b8a-36b1-4482-bb8e-f289886d7d82","Type":"ContainerDied","Data":"399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4"} Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.139620 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399db0225049327cdc49fcb04ac8d47d9a840107f0875e8379e132ea948586e4" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.144480 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.172863 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=38.345463088 podStartE2EDuration="1m6.172826442s" podCreationTimestamp="2026-01-22 06:55:27 +0000 UTC" firstStartedPulling="2026-01-22 06:55:29.176130311 +0000 UTC m=+1221.318037016" lastFinishedPulling="2026-01-22 06:55:57.003493665 +0000 UTC m=+1249.145400370" observedRunningTime="2026-01-22 06:56:33.164773981 +0000 UTC m=+1285.306680706" watchObservedRunningTime="2026-01-22 06:56:33.172826442 +0000 UTC m=+1285.314733147" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.638632 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.649114 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-w6jkf" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.758683 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts\") pod \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.758728 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts\") pod \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.759408 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" (UID: "6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.759418 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d421e895-4cb2-4a95-9a5b-ebf16f934a57" (UID: "d421e895-4cb2-4a95-9a5b-ebf16f934a57"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.759524 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8fsr\" (UniqueName: \"kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr\") pod \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\" (UID: \"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3\") " Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.759596 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdmjv\" (UniqueName: \"kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv\") pod \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\" (UID: \"d421e895-4cb2-4a95-9a5b-ebf16f934a57\") " Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.760044 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d421e895-4cb2-4a95-9a5b-ebf16f934a57-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.760864 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.766940 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv" (OuterVolumeSpecName: "kube-api-access-wdmjv") pod "d421e895-4cb2-4a95-9a5b-ebf16f934a57" (UID: "d421e895-4cb2-4a95-9a5b-ebf16f934a57"). InnerVolumeSpecName "kube-api-access-wdmjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.767245 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr" (OuterVolumeSpecName: "kube-api-access-g8fsr") pod "6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" (UID: "6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3"). InnerVolumeSpecName "kube-api-access-g8fsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.862636 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8fsr\" (UniqueName: \"kubernetes.io/projected/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3-kube-api-access-g8fsr\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:33 crc kubenswrapper[4720]: I0122 06:56:33.862682 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdmjv\" (UniqueName: \"kubernetes.io/projected/d421e895-4cb2-4a95-9a5b-ebf16f934a57-kube-api-access-wdmjv\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.150869 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-w6jkf" event={"ID":"6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3","Type":"ContainerDied","Data":"b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00"} Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.150968 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4067124dafb235bf8f514d7e4179fd9d6013e2e316fafb8db0464136c620c00" Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.150883 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-w6jkf" Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.152843 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg" Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.152959 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg" event={"ID":"d421e895-4cb2-4a95-9a5b-ebf16f934a57","Type":"ContainerDied","Data":"ac35f699602e596a1168fd5644a1ef2d90e134eb48ed148946b1e6e3f3cc6b4e"} Jan 22 06:56:34 crc kubenswrapper[4720]: I0122 06:56:34.153016 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac35f699602e596a1168fd5644a1ef2d90e134eb48ed148946b1e6e3f3cc6b4e" Jan 22 06:56:36 crc kubenswrapper[4720]: I0122 06:56:36.940549 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 22 06:56:36 crc kubenswrapper[4720]: I0122 06:56:36.941116 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="prometheus" containerID="cri-o://3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc" gracePeriod=600 Jan 22 06:56:36 crc kubenswrapper[4720]: I0122 06:56:36.941180 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="thanos-sidecar" containerID="cri-o://e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713" gracePeriod=600 Jan 22 06:56:36 crc kubenswrapper[4720]: I0122 06:56:36.941251 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="config-reloader" containerID="cri-o://0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7" gracePeriod=600 Jan 22 06:56:37 crc kubenswrapper[4720]: E0122 
06:56:37.157731 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d97711e_2650_4d76_b960_21e698d8e10a.slice/crio-e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d97711e_2650_4d76_b960_21e698d8e10a.slice/crio-conmon-e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d97711e_2650_4d76_b960_21e698d8e10a.slice/crio-conmon-3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d97711e_2650_4d76_b960_21e698d8e10a.slice/crio-3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc.scope\": RecentStats: unable to find data in memory cache]" Jan 22 06:56:37 crc kubenswrapper[4720]: I0122 06:56:37.181110 4720 generic.go:334] "Generic (PLEG): container finished" podID="4d97711e-2650-4d76-b960-21e698d8e10a" containerID="e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713" exitCode=0 Jan 22 06:56:37 crc kubenswrapper[4720]: I0122 06:56:37.181484 4720 generic.go:334] "Generic (PLEG): container finished" podID="4d97711e-2650-4d76-b960-21e698d8e10a" containerID="3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc" exitCode=0 Jan 22 06:56:37 crc kubenswrapper[4720]: I0122 06:56:37.181390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerDied","Data":"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"} Jan 22 06:56:37 crc kubenswrapper[4720]: I0122 06:56:37.181534 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerDied","Data":"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"} Jan 22 06:56:37 crc kubenswrapper[4720]: I0122 06:56:37.966378 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056461 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056547 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056580 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056629 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 
06:56:38.056667 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056705 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056739 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7hv8\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056929 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.056961 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.057005 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config\") pod \"4d97711e-2650-4d76-b960-21e698d8e10a\" (UID: \"4d97711e-2650-4d76-b960-21e698d8e10a\") " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.057877 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.058001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.058095 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.068178 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config" (OuterVolumeSpecName: "config") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.070473 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out" (OuterVolumeSpecName: "config-out") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.080305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.082552 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.084240 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8" (OuterVolumeSpecName: "kube-api-access-b7hv8") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "kube-api-access-b7hv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.114102 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config" (OuterVolumeSpecName: "web-config") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.121096 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "4d97711e-2650-4d76-b960-21e698d8e10a" (UID: "4d97711e-2650-4d76-b960-21e698d8e10a"). InnerVolumeSpecName "pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159696 4720 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159745 4720 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159758 4720 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159772 4720 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4d97711e-2650-4d76-b960-21e698d8e10a-config-out\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159782 4720 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159794 4720 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159803 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7hv8\" (UniqueName: 
\"kubernetes.io/projected/4d97711e-2650-4d76-b960-21e698d8e10a-kube-api-access-b7hv8\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159851 4720 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") on node \"crc\" " Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159863 4720 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4d97711e-2650-4d76-b960-21e698d8e10a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.159877 4720 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4d97711e-2650-4d76-b960-21e698d8e10a-web-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.184782 4720 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.184990 4720 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017") on node "crc"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.192237 4720 generic.go:334] "Generic (PLEG): container finished" podID="4d97711e-2650-4d76-b960-21e698d8e10a" containerID="0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7" exitCode=0
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.192299 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerDied","Data":"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"}
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.192339 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"4d97711e-2650-4d76-b960-21e698d8e10a","Type":"ContainerDied","Data":"31515a033e305f943cab331442dbd93aa3b397e4d55cd092469400e083a6d68b"}
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.192359 4720 scope.go:117] "RemoveContainer" containerID="e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.192527 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.219076 4720 scope.go:117] "RemoveContainer" containerID="0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.241788 4720 scope.go:117] "RemoveContainer" containerID="3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.248106 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.261461 4720 reconciler_common.go:293] "Volume detached for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") on node \"crc\" DevicePath \"\""
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.262684 4720 scope.go:117] "RemoveContainer" containerID="430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.269841 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.287770 4720 scope.go:117] "RemoveContainer" containerID="e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.288509 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713\": container with ID starting with e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713 not found: ID does not exist" containerID="e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.288551 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713"} err="failed to get container status \"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713\": rpc error: code = NotFound desc = could not find container \"e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713\": container with ID starting with e3b32f842059b4b41d96bf7d62c937e2378313eab27884f899bd4979eae1f713 not found: ID does not exist"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.288579 4720 scope.go:117] "RemoveContainer" containerID="0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.289312 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7\": container with ID starting with 0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7 not found: ID does not exist" containerID="0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.289341 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7"} err="failed to get container status \"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7\": rpc error: code = NotFound desc = could not find container \"0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7\": container with ID starting with 0425326e75ccb10579acac7738e359f31550e5d9f508c8b82e2ae226398731e7 not found: ID does not exist"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.289362 4720 scope.go:117] "RemoveContainer" containerID="3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.289788 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc\": container with ID starting with 3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc not found: ID does not exist" containerID="3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.289858 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc"} err="failed to get container status \"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc\": rpc error: code = NotFound desc = could not find container \"3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc\": container with ID starting with 3b49fa2fa24c5ecea272e747a0b591891a3dd1d7c68695351099a1acde3770cc not found: ID does not exist"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.289897 4720 scope.go:117] "RemoveContainer" containerID="430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.290273 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53\": container with ID starting with 430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53 not found: ID does not exist" containerID="430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.290313 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53"} err="failed to get container status \"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53\": rpc error: code = NotFound desc = could not find container \"430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53\": container with ID starting with 430ee309c2505c07802557a8951630e2b2e15555750189137fb85b7719b9cf53 not found: ID does not exist"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309197 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309602 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" containerName="mariadb-database-create"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309622 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" containerName="mariadb-database-create"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309658 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="prometheus"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309665 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="prometheus"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309679 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="thanos-sidecar"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309686 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="thanos-sidecar"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309705 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="init-config-reloader"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309714 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="init-config-reloader"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309730 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="config-reloader"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309737 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="config-reloader"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309750 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1642b8a-36b1-4482-bb8e-f289886d7d82" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309756 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1642b8a-36b1-4482-bb8e-f289886d7d82" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: E0122 06:56:38.309770 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d421e895-4cb2-4a95-9a5b-ebf16f934a57" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309776 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d421e895-4cb2-4a95-9a5b-ebf16f934a57" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309935 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="thanos-sidecar"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309945 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d421e895-4cb2-4a95-9a5b-ebf16f934a57" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309956 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" containerName="mariadb-database-create"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309966 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="config-reloader"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309974 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="prometheus"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.309980 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1642b8a-36b1-4482-bb8e-f289886d7d82" containerName="mariadb-account-create-update"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.311741 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.315149 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.315399 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.315551 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.317268 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.317437 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.320603 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.320751 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.322004 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-7592l"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.333001 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.335621 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464537 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464599 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464632 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464665 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464695 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464893 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.464924 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzvg\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-kube-api-access-8jzvg\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.465114 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcb7da9c-0e97-404e-9b99-87c192455159-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.465185 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.465346 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.465406 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567343 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567393 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567540 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567644 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567677 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzvg\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-kube-api-access-8jzvg\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567731 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.567757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcb7da9c-0e97-404e-9b99-87c192455159-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.569543 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.569594 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.571186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/dcb7da9c-0e97-404e-9b99-87c192455159-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.573061 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dcb7da9c-0e97-404e-9b99-87c192455159-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.574386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.574718 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.575207 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.576457 4720 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.576524 4720 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e74ba53cc22a17779c8ca9de275ce3db1c5e72fbfc84ee06e990819ef29e35bd/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.585748 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.585863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.586169 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.593223 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/dcb7da9c-0e97-404e-9b99-87c192455159-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.593750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzvg\" (UniqueName: \"kubernetes.io/projected/dcb7da9c-0e97-404e-9b99-87c192455159-kube-api-access-8jzvg\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.645479 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40d4a4fe-c9b1-4471-a9eb-4f5b2c118017\") pod \"prometheus-metric-storage-0\" (UID: \"dcb7da9c-0e97-404e-9b99-87c192455159\") " pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:38 crc kubenswrapper[4720]: I0122 06:56:38.932144 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0"
Jan 22 06:56:39 crc kubenswrapper[4720]: I0122 06:56:39.504158 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"]
Jan 22 06:56:39 crc kubenswrapper[4720]: W0122 06:56:39.508527 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcb7da9c_0e97_404e_9b99_87c192455159.slice/crio-1822c346fb81ba9ef2c3c4a023ba8ff540a45e3604fae3d1372bdd314bf4623c WatchSource:0}: Error finding container 1822c346fb81ba9ef2c3c4a023ba8ff540a45e3604fae3d1372bdd314bf4623c: Status 404 returned error can't find the container with id 1822c346fb81ba9ef2c3c4a023ba8ff540a45e3604fae3d1372bdd314bf4623c
Jan 22 06:56:40 crc kubenswrapper[4720]: I0122 06:56:40.226035 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" path="/var/lib/kubelet/pods/4d97711e-2650-4d76-b960-21e698d8e10a/volumes"
Jan 22 06:56:40 crc kubenswrapper[4720]: I0122 06:56:40.235526 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerStarted","Data":"1822c346fb81ba9ef2c3c4a023ba8ff540a45e3604fae3d1372bdd314bf4623c"}
Jan 22 06:56:40 crc kubenswrapper[4720]: I0122 06:56:40.935535 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="4d97711e-2650-4d76-b960-21e698d8e10a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.111:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 22 06:56:43 crc kubenswrapper[4720]: I0122 06:56:43.261555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerStarted","Data":"fea82760639227b6f70105707ee8fdcd80b7a6af399567d15688721adb71b20b"}
Jan 22 06:56:48 crc kubenswrapper[4720]: I0122 06:56:48.302195 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0"
Jan 22 06:56:48 crc kubenswrapper[4720]: I0122 06:56:48.809203 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.013849 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-v5pc9"]
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.015408 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.019570 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.020794 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.023353 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.024047 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-skq9h"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.040720 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-v5pc9"]
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.081652 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.081720 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.081771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78zsh\" (UniqueName: \"kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.183858 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.183955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9"
Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.184033 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78zsh\" (UniqueName:
\"kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.192435 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.209903 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.214733 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78zsh\" (UniqueName: \"kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh\") pod \"keystone-db-sync-v5pc9\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.333464 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:56:51 crc kubenswrapper[4720]: W0122 06:56:51.966686 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice/crio-b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3 WatchSource:0}: Error finding container b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3: Status 404 returned error can't find the container with id b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3 Jan 22 06:56:51 crc kubenswrapper[4720]: I0122 06:56:51.966712 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-v5pc9"] Jan 22 06:56:52 crc kubenswrapper[4720]: I0122 06:56:52.346669 4720 generic.go:334] "Generic (PLEG): container finished" podID="dcb7da9c-0e97-404e-9b99-87c192455159" containerID="fea82760639227b6f70105707ee8fdcd80b7a6af399567d15688721adb71b20b" exitCode=0 Jan 22 06:56:52 crc kubenswrapper[4720]: I0122 06:56:52.346760 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerDied","Data":"fea82760639227b6f70105707ee8fdcd80b7a6af399567d15688721adb71b20b"} Jan 22 06:56:52 crc kubenswrapper[4720]: I0122 06:56:52.348664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" event={"ID":"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99","Type":"ContainerStarted","Data":"b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3"} Jan 22 06:56:53 crc kubenswrapper[4720]: I0122 06:56:53.390794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerStarted","Data":"8e6a0d7ef2c6e012fe6b2fb4820040a1816088fc1ec2be30b25c2b2a4f66fa9e"} Jan 22 06:56:56 crc kubenswrapper[4720]: I0122 06:56:56.427532 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerStarted","Data":"30ee0427bfceb08b2af16ab7480acd5bc8f67e190501ce155a4a40112d332a65"} Jan 22 06:57:02 crc kubenswrapper[4720]: I0122 06:57:02.505920 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" event={"ID":"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99","Type":"ContainerStarted","Data":"f5f67f7122c451feddcf36c21770c5990412236753b1de32e9107c61632de28d"} Jan 22 06:57:02 crc kubenswrapper[4720]: I0122 06:57:02.508677 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"dcb7da9c-0e97-404e-9b99-87c192455159","Type":"ContainerStarted","Data":"2977d9c97cdaa9b9b26905a15dd3b099b6af47a4734642788e11473521517a23"} Jan 22 06:57:02 crc kubenswrapper[4720]: I0122 06:57:02.554643 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" podStartSLOduration=2.88696175 podStartE2EDuration="12.5546017s" podCreationTimestamp="2026-01-22 06:56:50 +0000 UTC" firstStartedPulling="2026-01-22 06:56:51.969854435 +0000 UTC m=+1304.111761140" lastFinishedPulling="2026-01-22 06:57:01.637494385 +0000 UTC m=+1313.779401090" observedRunningTime="2026-01-22 06:57:02.522988614 +0000 UTC m=+1314.664895339" watchObservedRunningTime="2026-01-22 06:57:02.5546017 +0000 UTC m=+1314.696508405" Jan 22 06:57:02 crc kubenswrapper[4720]: I0122 06:57:02.556720 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=24.556713721 
podStartE2EDuration="24.556713721s" podCreationTimestamp="2026-01-22 06:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:57:02.551706827 +0000 UTC m=+1314.693613552" watchObservedRunningTime="2026-01-22 06:57:02.556713721 +0000 UTC m=+1314.698620426" Jan 22 06:57:03 crc kubenswrapper[4720]: I0122 06:57:03.933319 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:57:06 crc kubenswrapper[4720]: I0122 06:57:06.541418 4720 generic.go:334] "Generic (PLEG): container finished" podID="1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" containerID="f5f67f7122c451feddcf36c21770c5990412236753b1de32e9107c61632de28d" exitCode=0 Jan 22 06:57:06 crc kubenswrapper[4720]: I0122 06:57:06.541510 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" event={"ID":"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99","Type":"ContainerDied","Data":"f5f67f7122c451feddcf36c21770c5990412236753b1de32e9107c61632de28d"} Jan 22 06:57:07 crc kubenswrapper[4720]: I0122 06:57:07.899251 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.094171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78zsh\" (UniqueName: \"kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh\") pod \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.094276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle\") pod \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.094336 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data\") pod \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\" (UID: \"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99\") " Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.123113 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh" (OuterVolumeSpecName: "kube-api-access-78zsh") pod "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" (UID: "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99"). InnerVolumeSpecName "kube-api-access-78zsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.165108 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" (UID: "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.192115 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data" (OuterVolumeSpecName: "config-data") pod "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" (UID: "1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.199428 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78zsh\" (UniqueName: \"kubernetes.io/projected/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-kube-api-access-78zsh\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.199459 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.199470 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.592285 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" event={"ID":"1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99","Type":"ContainerDied","Data":"b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3"} Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.592867 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50994f75aa605d118b88c83e9e6fb67e2c05ac185ecd19de13ebf06570964b3" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.592335 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-v5pc9" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.717803 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-22bbf"] Jan 22 06:57:08 crc kubenswrapper[4720]: E0122 06:57:08.718198 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" containerName="keystone-db-sync" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.718243 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" containerName="keystone-db-sync" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.718408 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" containerName="keystone-db-sync" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.719032 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.722096 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.722389 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.722536 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-skq9h" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.722711 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.725348 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.742275 4720 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-22bbf"] Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808222 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808279 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808425 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808486 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbx9h\" (UniqueName: \"kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.808595 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.905628 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.907662 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.910143 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.910205 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.910235 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys\") pod \"keystone-bootstrap-22bbf\" (UID: 
\"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.910391 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911061 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911103 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbx9h\" (UniqueName: \"kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911144 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys\") pod \"keystone-bootstrap-22bbf\" (UID: 
\"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911241 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911319 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87lpw\" (UniqueName: \"kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911391 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911422 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.911445 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.914925 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.915102 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.915260 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.915821 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.917839 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.921800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.922620 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.944738 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.945965 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.947524 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbx9h\" (UniqueName: \"kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h\") pod \"keystone-bootstrap-22bbf\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:08 crc kubenswrapper[4720]: I0122 06:57:08.962684 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013076 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013154 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 
06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013212 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87lpw\" (UniqueName: \"kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013255 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013286 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013326 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.013367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.016441 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.017188 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.020345 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.022835 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.023315 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.027727 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc 
kubenswrapper[4720]: I0122 06:57:09.033701 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.037631 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87lpw\" (UniqueName: \"kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw\") pod \"ceilometer-0\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.308466 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.528854 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-22bbf"] Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.564276 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.602662 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerStarted","Data":"f3e026b2c0923f0054fd34bac4b5be59cf13b10acf7dba6d1440f09576b19dec"} Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.603860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" event={"ID":"63bf0b08-e9b3-473f-88fc-ab639d2428d9","Type":"ContainerStarted","Data":"04b08c7b1fda0e0822b32e7b93e8618d546e551c2f3715ad9ce2e15d1087da47"} Jan 22 06:57:09 crc kubenswrapper[4720]: I0122 06:57:09.609633 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 22 06:57:10 crc kubenswrapper[4720]: I0122 06:57:10.620837 4720 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" event={"ID":"63bf0b08-e9b3-473f-88fc-ab639d2428d9","Type":"ContainerStarted","Data":"92bba22f19505073c77bbd838b72601a8a6bc744e60f1b3a73bb9dc2514c635a"} Jan 22 06:57:10 crc kubenswrapper[4720]: I0122 06:57:10.653157 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" podStartSLOduration=2.653131036 podStartE2EDuration="2.653131036s" podCreationTimestamp="2026-01-22 06:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:57:10.650972094 +0000 UTC m=+1322.792878809" watchObservedRunningTime="2026-01-22 06:57:10.653131036 +0000 UTC m=+1322.795037741" Jan 22 06:57:10 crc kubenswrapper[4720]: I0122 06:57:10.947456 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:13 crc kubenswrapper[4720]: I0122 06:57:13.649455 4720 generic.go:334] "Generic (PLEG): container finished" podID="63bf0b08-e9b3-473f-88fc-ab639d2428d9" containerID="92bba22f19505073c77bbd838b72601a8a6bc744e60f1b3a73bb9dc2514c635a" exitCode=0 Jan 22 06:57:13 crc kubenswrapper[4720]: I0122 06:57:13.649552 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" event={"ID":"63bf0b08-e9b3-473f-88fc-ab639d2428d9","Type":"ContainerDied","Data":"92bba22f19505073c77bbd838b72601a8a6bc744e60f1b3a73bb9dc2514c635a"} Jan 22 06:57:14 crc kubenswrapper[4720]: I0122 06:57:14.658922 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerStarted","Data":"18923d1c339ae24b4a6cfa84bb610f59265122e6a44e815887fa3b323dd6f392"} Jan 22 06:57:14 crc kubenswrapper[4720]: I0122 06:57:14.998389 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.145806 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.145929 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.145995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.146054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbx9h\" (UniqueName: \"kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.146163 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.146255 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle\") pod \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\" (UID: \"63bf0b08-e9b3-473f-88fc-ab639d2428d9\") " Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.151819 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h" (OuterVolumeSpecName: "kube-api-access-rbx9h") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "kube-api-access-rbx9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.152997 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts" (OuterVolumeSpecName: "scripts") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.153634 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.159389 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.170409 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data" (OuterVolumeSpecName: "config-data") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.174304 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63bf0b08-e9b3-473f-88fc-ab639d2428d9" (UID: "63bf0b08-e9b3-473f-88fc-ab639d2428d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.248568 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.248620 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.248638 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.248693 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc 
kubenswrapper[4720]: I0122 06:57:15.248707 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/63bf0b08-e9b3-473f-88fc-ab639d2428d9-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.248721 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbx9h\" (UniqueName: \"kubernetes.io/projected/63bf0b08-e9b3-473f-88fc-ab639d2428d9-kube-api-access-rbx9h\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.667598 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" event={"ID":"63bf0b08-e9b3-473f-88fc-ab639d2428d9","Type":"ContainerDied","Data":"04b08c7b1fda0e0822b32e7b93e8618d546e551c2f3715ad9ce2e15d1087da47"} Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.667644 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04b08c7b1fda0e0822b32e7b93e8618d546e551c2f3715ad9ce2e15d1087da47" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.667711 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-22bbf" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.792492 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-22bbf"] Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.797596 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-22bbf"] Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.883855 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2wf2m"] Jan 22 06:57:15 crc kubenswrapper[4720]: E0122 06:57:15.884284 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63bf0b08-e9b3-473f-88fc-ab639d2428d9" containerName="keystone-bootstrap" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.884307 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="63bf0b08-e9b3-473f-88fc-ab639d2428d9" containerName="keystone-bootstrap" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.884545 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="63bf0b08-e9b3-473f-88fc-ab639d2428d9" containerName="keystone-bootstrap" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.885221 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.888623 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.888717 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.889329 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-skq9h" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.889829 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.904673 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 06:57:15 crc kubenswrapper[4720]: I0122 06:57:15.910532 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2wf2m"] Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.061278 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.061358 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.061551 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.061770 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.062017 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-529zg\" (UniqueName: \"kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.062060 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.163845 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-529zg\" (UniqueName: \"kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 
06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.164362 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.164401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.164448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.164485 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.164532 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.170316 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.170730 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.171368 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.171511 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.172013 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.189602 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-529zg\" 
(UniqueName: \"kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg\") pod \"keystone-bootstrap-2wf2m\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.203580 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.226504 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63bf0b08-e9b3-473f-88fc-ab639d2428d9" path="/var/lib/kubelet/pods/63bf0b08-e9b3-473f-88fc-ab639d2428d9/volumes" Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.663407 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2wf2m"] Jan 22 06:57:16 crc kubenswrapper[4720]: W0122 06:57:16.668194 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9501447_d695_42bc_ab22_0422b2db3647.slice/crio-de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6 WatchSource:0}: Error finding container de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6: Status 404 returned error can't find the container with id de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6 Jan 22 06:57:16 crc kubenswrapper[4720]: I0122 06:57:16.701092 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerStarted","Data":"7b5c2178bc69d0aec900f125fa9d77557c8e5a376035a7512141070f4b898159"} Jan 22 06:57:17 crc kubenswrapper[4720]: I0122 06:57:17.735014 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" 
event={"ID":"b9501447-d695-42bc-ab22-0422b2db3647","Type":"ContainerStarted","Data":"9c1de39ef7a6e57b29b6c436cb2274f19acfee9e2e92f84c2315ed45f2496cfd"} Jan 22 06:57:17 crc kubenswrapper[4720]: I0122 06:57:17.735081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" event={"ID":"b9501447-d695-42bc-ab22-0422b2db3647","Type":"ContainerStarted","Data":"de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6"} Jan 22 06:57:17 crc kubenswrapper[4720]: I0122 06:57:17.765901 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" podStartSLOduration=2.765877917 podStartE2EDuration="2.765877917s" podCreationTimestamp="2026-01-22 06:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:57:17.756627262 +0000 UTC m=+1329.898533967" watchObservedRunningTime="2026-01-22 06:57:17.765877917 +0000 UTC m=+1329.907784622" Jan 22 06:57:18 crc kubenswrapper[4720]: E0122 06:57:18.189017 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:57:21 crc kubenswrapper[4720]: I0122 06:57:21.953824 4720 generic.go:334] "Generic (PLEG): container finished" podID="b9501447-d695-42bc-ab22-0422b2db3647" containerID="9c1de39ef7a6e57b29b6c436cb2274f19acfee9e2e92f84c2315ed45f2496cfd" exitCode=0 Jan 22 06:57:21 crc kubenswrapper[4720]: I0122 06:57:21.953899 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" event={"ID":"b9501447-d695-42bc-ab22-0422b2db3647","Type":"ContainerDied","Data":"9c1de39ef7a6e57b29b6c436cb2274f19acfee9e2e92f84c2315ed45f2496cfd"} Jan 22 06:57:22 crc 
kubenswrapper[4720]: I0122 06:57:22.968274 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerStarted","Data":"08b74e16faf2070307d3bee0bb8bcb75cc4ba0cff581bca2ae83e9e7d7341df1"} Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.351836 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404121 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404372 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404409 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404444 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404479 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.404517 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-529zg\" (UniqueName: \"kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg\") pod \"b9501447-d695-42bc-ab22-0422b2db3647\" (UID: \"b9501447-d695-42bc-ab22-0422b2db3647\") " Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.411958 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts" (OuterVolumeSpecName: "scripts") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.412095 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.413058 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.428010 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg" (OuterVolumeSpecName: "kube-api-access-529zg") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "kube-api-access-529zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.432372 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.442116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data" (OuterVolumeSpecName: "config-data") pod "b9501447-d695-42bc-ab22-0422b2db3647" (UID: "b9501447-d695-42bc-ab22-0422b2db3647"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507017 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507066 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507079 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507090 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-529zg\" (UniqueName: \"kubernetes.io/projected/b9501447-d695-42bc-ab22-0422b2db3647-kube-api-access-529zg\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507100 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.507109 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b9501447-d695-42bc-ab22-0422b2db3647-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.976372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" event={"ID":"b9501447-d695-42bc-ab22-0422b2db3647","Type":"ContainerDied","Data":"de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6"} Jan 22 06:57:23 crc 
kubenswrapper[4720]: I0122 06:57:23.976421 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de742a5ac1d471871dd65100264756686312696210fddb1cb3f1c85df05733b6" Jan 22 06:57:23 crc kubenswrapper[4720]: I0122 06:57:23.976460 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-2wf2m" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.185229 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 06:57:24 crc kubenswrapper[4720]: E0122 06:57:24.185813 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9501447-d695-42bc-ab22-0422b2db3647" containerName="keystone-bootstrap" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.185836 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9501447-d695-42bc-ab22-0422b2db3647" containerName="keystone-bootstrap" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.186099 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9501447-d695-42bc-ab22-0422b2db3647" containerName="keystone-bootstrap" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.186959 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.192686 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.192986 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-skq9h" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.193174 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.193386 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.194086 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.200631 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.230362 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325387 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts\") pod 
\"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325723 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325893 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kknzf\" (UniqueName: \"kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325939 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.325956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.326007 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448192 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448275 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448372 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448403 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kknzf\" (UniqueName: \"kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448448 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.448484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.458461 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.459504 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.462503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.463659 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.472498 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.474111 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle\") 
pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.477888 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.484968 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kknzf\" (UniqueName: \"kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf\") pod \"keystone-fb4ff76bc-49d2q\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:24 crc kubenswrapper[4720]: I0122 06:57:24.506778 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:25 crc kubenswrapper[4720]: I0122 06:57:25.086266 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 06:57:26 crc kubenswrapper[4720]: I0122 06:57:26.000936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" event={"ID":"94acf8e7-279f-4560-9716-56f731501d94","Type":"ContainerStarted","Data":"9bb688468587c4651c3504c958b57edd707756a40a5140b3b98afbdfe7bd6160"} Jan 22 06:57:27 crc kubenswrapper[4720]: I0122 06:57:27.011900 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" event={"ID":"94acf8e7-279f-4560-9716-56f731501d94","Type":"ContainerStarted","Data":"19f76acff38894c114f2443a20a59c6e2d8b7aa672fcd330010cc3b567d81d35"} Jan 22 06:57:27 crc kubenswrapper[4720]: I0122 06:57:27.012096 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:27 crc kubenswrapper[4720]: I0122 06:57:27.059791 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" podStartSLOduration=3.059764148 podStartE2EDuration="3.059764148s" podCreationTimestamp="2026-01-22 06:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:57:27.056649289 +0000 UTC m=+1339.198555994" watchObservedRunningTime="2026-01-22 06:57:27.059764148 +0000 UTC m=+1339.201670873" Jan 22 06:57:28 crc kubenswrapper[4720]: E0122 06:57:28.429258 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice\": RecentStats: unable to find data in memory cache]" 
Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.103418 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerStarted","Data":"bc20b35254f76e6196e986fa94869d5aaa8ef0bf7100c94fa101ba923c33d9d7"} Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.104038 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.103629 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-central-agent" containerID="cri-o://18923d1c339ae24b4a6cfa84bb610f59265122e6a44e815887fa3b323dd6f392" gracePeriod=30 Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.103709 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="sg-core" containerID="cri-o://08b74e16faf2070307d3bee0bb8bcb75cc4ba0cff581bca2ae83e9e7d7341df1" gracePeriod=30 Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.103711 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="proxy-httpd" containerID="cri-o://bc20b35254f76e6196e986fa94869d5aaa8ef0bf7100c94fa101ba923c33d9d7" gracePeriod=30 Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.103658 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-notification-agent" containerID="cri-o://7b5c2178bc69d0aec900f125fa9d77557c8e5a376035a7512141070f4b898159" gracePeriod=30 Jan 22 06:57:33 crc kubenswrapper[4720]: I0122 06:57:33.137604 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.184964933 podStartE2EDuration="25.137578635s" podCreationTimestamp="2026-01-22 06:57:08 +0000 UTC" firstStartedPulling="2026-01-22 06:57:09.56582148 +0000 UTC m=+1321.707728185" lastFinishedPulling="2026-01-22 06:57:32.518435182 +0000 UTC m=+1344.660341887" observedRunningTime="2026-01-22 06:57:33.128031921 +0000 UTC m=+1345.269938636" watchObservedRunningTime="2026-01-22 06:57:33.137578635 +0000 UTC m=+1345.279485340" Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 06:57:34.113623 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerID="bc20b35254f76e6196e986fa94869d5aaa8ef0bf7100c94fa101ba923c33d9d7" exitCode=0 Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 06:57:34.113668 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerID="08b74e16faf2070307d3bee0bb8bcb75cc4ba0cff581bca2ae83e9e7d7341df1" exitCode=2 Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 06:57:34.113681 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerID="18923d1c339ae24b4a6cfa84bb610f59265122e6a44e815887fa3b323dd6f392" exitCode=0 Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 06:57:34.113703 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerDied","Data":"bc20b35254f76e6196e986fa94869d5aaa8ef0bf7100c94fa101ba923c33d9d7"} Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 06:57:34.113763 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerDied","Data":"08b74e16faf2070307d3bee0bb8bcb75cc4ba0cff581bca2ae83e9e7d7341df1"} Jan 22 06:57:34 crc kubenswrapper[4720]: I0122 
06:57:34.113775 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerDied","Data":"18923d1c339ae24b4a6cfa84bb610f59265122e6a44e815887fa3b323dd6f392"} Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.152045 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerID="7b5c2178bc69d0aec900f125fa9d77557c8e5a376035a7512141070f4b898159" exitCode=0 Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.152622 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerDied","Data":"7b5c2178bc69d0aec900f125fa9d77557c8e5a376035a7512141070f4b898159"} Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.335896 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441280 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441343 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441422 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: 
\"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87lpw\" (UniqueName: \"kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441528 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441554 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.441573 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd\") pod \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\" (UID: \"6ab539c5-9633-47b3-a904-e5bb0f40c1c8\") " Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.442380 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.443153 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.451141 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw" (OuterVolumeSpecName: "kube-api-access-87lpw") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "kube-api-access-87lpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.451561 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts" (OuterVolumeSpecName: "scripts") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.470311 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.529294 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543715 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543766 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543803 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543818 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543832 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.543849 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87lpw\" (UniqueName: 
\"kubernetes.io/projected/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-kube-api-access-87lpw\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.554670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data" (OuterVolumeSpecName: "config-data") pod "6ab539c5-9633-47b3-a904-e5bb0f40c1c8" (UID: "6ab539c5-9633-47b3-a904-e5bb0f40c1c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:38 crc kubenswrapper[4720]: E0122 06:57:38.636609 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:57:38 crc kubenswrapper[4720]: I0122 06:57:38.645220 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ab539c5-9633-47b3-a904-e5bb0f40c1c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.169425 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6ab539c5-9633-47b3-a904-e5bb0f40c1c8","Type":"ContainerDied","Data":"f3e026b2c0923f0054fd34bac4b5be59cf13b10acf7dba6d1440f09576b19dec"} Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.169766 4720 scope.go:117] "RemoveContainer" containerID="bc20b35254f76e6196e986fa94869d5aaa8ef0bf7100c94fa101ba923c33d9d7" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.170019 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.202658 4720 scope.go:117] "RemoveContainer" containerID="08b74e16faf2070307d3bee0bb8bcb75cc4ba0cff581bca2ae83e9e7d7341df1" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.218277 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.233658 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240014 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:39 crc kubenswrapper[4720]: E0122 06:57:39.240442 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-notification-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240463 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-notification-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: E0122 06:57:39.240480 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="sg-core" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240487 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="sg-core" Jan 22 06:57:39 crc kubenswrapper[4720]: E0122 06:57:39.240513 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-central-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240519 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-central-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: E0122 
06:57:39.240532 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="proxy-httpd" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240538 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="proxy-httpd" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240679 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-notification-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240694 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="ceilometer-central-agent" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240704 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="proxy-httpd" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.240716 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" containerName="sg-core" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.244489 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.247961 4720 scope.go:117] "RemoveContainer" containerID="7b5c2178bc69d0aec900f125fa9d77557c8e5a376035a7512141070f4b898159" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.248389 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.248618 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.257951 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.258807 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.258895 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7hkg\" (UniqueName: \"kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.259127 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.259347 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.259577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.259857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.260085 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.318220 4720 scope.go:117] "RemoveContainer" containerID="18923d1c339ae24b4a6cfa84bb610f59265122e6a44e815887fa3b323dd6f392" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.348549 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:39 crc kubenswrapper[4720]: E0122 06:57:39.349211 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-w7hkg log-httpd 
run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="ff337faa-68c4-4a45-b6de-a7f4fae4de14" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361292 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361405 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7hkg\" (UniqueName: \"kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361513 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.361555 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.363559 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.363945 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.368859 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.370261 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.370768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.372119 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:39 crc kubenswrapper[4720]: I0122 06:57:39.383447 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7hkg\" (UniqueName: \"kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg\") pod \"ceilometer-0\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.179205 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.192279 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.222940 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab539c5-9633-47b3-a904-e5bb0f40c1c8" path="/var/lib/kubelet/pods/6ab539c5-9633-47b3-a904-e5bb0f40c1c8/volumes" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.375559 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376182 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376316 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376438 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7hkg\" (UniqueName: \"kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376531 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml\") pod 
\"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376675 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376725 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.376841 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd\") pod \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\" (UID: \"ff337faa-68c4-4a45-b6de-a7f4fae4de14\") " Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.377060 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.378892 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data" (OuterVolumeSpecName: "config-data") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.379840 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.380037 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.380125 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff337faa-68c4-4a45-b6de-a7f4fae4de14-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.379972 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg" (OuterVolumeSpecName: "kube-api-access-w7hkg") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "kube-api-access-w7hkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.381133 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.381364 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts" (OuterVolumeSpecName: "scripts") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.383655 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff337faa-68c4-4a45-b6de-a7f4fae4de14" (UID: "ff337faa-68c4-4a45-b6de-a7f4fae4de14"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.481599 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.481954 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.482240 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff337faa-68c4-4a45-b6de-a7f4fae4de14-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:57:40 crc kubenswrapper[4720]: I0122 06:57:40.482326 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7hkg\" (UniqueName: \"kubernetes.io/projected/ff337faa-68c4-4a45-b6de-a7f4fae4de14-kube-api-access-w7hkg\") on node \"crc\" DevicePath \"\"" Jan 22 
06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.192633 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.269494 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.278425 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.309683 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.311825 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.315034 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.315253 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.320469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395725 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395792 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hpj7\" (UniqueName: 
\"kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395815 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395884 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.395926 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hpj7\" (UniqueName: \"kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497218 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497274 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497293 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497310 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 
06:57:41.497340 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497785 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.497851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.514470 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.514561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts\") pod 
\"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.514887 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.518294 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hpj7\" (UniqueName: \"kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.534986 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data\") pod \"ceilometer-0\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:41 crc kubenswrapper[4720]: I0122 06:57:41.630596 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:42 crc kubenswrapper[4720]: I0122 06:57:42.056033 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:57:42 crc kubenswrapper[4720]: I0122 06:57:42.203298 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerStarted","Data":"7c087d9a583ee49cec1717db9f84331ccd0adfc2db971ace338055f772fc8935"} Jan 22 06:57:42 crc kubenswrapper[4720]: I0122 06:57:42.232331 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff337faa-68c4-4a45-b6de-a7f4fae4de14" path="/var/lib/kubelet/pods/ff337faa-68c4-4a45-b6de-a7f4fae4de14/volumes" Jan 22 06:57:43 crc kubenswrapper[4720]: I0122 06:57:43.214498 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerStarted","Data":"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"} Jan 22 06:57:44 crc kubenswrapper[4720]: I0122 06:57:44.225358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerStarted","Data":"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"} Jan 22 06:57:44 crc kubenswrapper[4720]: I0122 06:57:44.225707 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerStarted","Data":"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"} Jan 22 06:57:46 crc kubenswrapper[4720]: I0122 06:57:46.249075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerStarted","Data":"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"} Jan 22 06:57:46 crc kubenswrapper[4720]: I0122 06:57:46.250645 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:57:46 crc kubenswrapper[4720]: I0122 06:57:46.278740 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.219479798 podStartE2EDuration="5.278721075s" podCreationTimestamp="2026-01-22 06:57:41 +0000 UTC" firstStartedPulling="2026-01-22 06:57:42.068028124 +0000 UTC m=+1354.209934829" lastFinishedPulling="2026-01-22 06:57:45.127269401 +0000 UTC m=+1357.269176106" observedRunningTime="2026-01-22 06:57:46.271255091 +0000 UTC m=+1358.413161806" watchObservedRunningTime="2026-01-22 06:57:46.278721075 +0000 UTC m=+1358.420627780" Jan 22 06:57:48 crc kubenswrapper[4720]: E0122 06:57:48.855347 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.535296 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.537611 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.555684 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.681426 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.681990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.682193 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wx9v\" (UniqueName: \"kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.784440 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.785145 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.785394 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.785428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wx9v\" (UniqueName: \"kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.785715 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.805070 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wx9v\" (UniqueName: \"kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v\") pod \"redhat-operators-hqfpx\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:50 crc kubenswrapper[4720]: I0122 06:57:50.860106 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:57:51 crc kubenswrapper[4720]: I0122 06:57:51.126832 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:57:51 crc kubenswrapper[4720]: I0122 06:57:51.288480 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerStarted","Data":"bac549315dccb295a0371d465c83fa1e5bdeb20c0ea6c3a84908c0030e1d9710"} Jan 22 06:57:52 crc kubenswrapper[4720]: I0122 06:57:52.299696 4720 generic.go:334] "Generic (PLEG): container finished" podID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerID="e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6" exitCode=0 Jan 22 06:57:52 crc kubenswrapper[4720]: I0122 06:57:52.299767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerDied","Data":"e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6"} Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.320090 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerStarted","Data":"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51"} Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.330446 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"] Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.332233 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.356223 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"] Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.457042 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.457362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.457434 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2s49\" (UniqueName: \"kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.559752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.559876 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.559924 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2s49\" (UniqueName: \"kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.560311 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.560431 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.582065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2s49\" (UniqueName: \"kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49\") pod \"certified-operators-ngp2n\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") " pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:54 crc kubenswrapper[4720]: I0122 06:57:54.650959 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:57:55 crc kubenswrapper[4720]: I0122 06:57:55.199843 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"] Jan 22 06:57:55 crc kubenswrapper[4720]: I0122 06:57:55.328595 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerStarted","Data":"69e357d9b14a23f0c452f873acfcbf957de5e4f6b06fddabf1e84e74751c4adb"} Jan 22 06:57:56 crc kubenswrapper[4720]: I0122 06:57:56.384061 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 06:57:57 crc kubenswrapper[4720]: I0122 06:57:57.353759 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerStarted","Data":"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"} Jan 22 06:57:57 crc kubenswrapper[4720]: I0122 06:57:57.356473 4720 generic.go:334] "Generic (PLEG): container finished" podID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerID="48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51" exitCode=0 Jan 22 06:57:57 crc kubenswrapper[4720]: I0122 06:57:57.356516 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerDied","Data":"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51"} Jan 22 06:57:58 crc kubenswrapper[4720]: I0122 06:57:58.366945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerStarted","Data":"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c"} Jan 22 
06:57:58 crc kubenswrapper[4720]: I0122 06:57:58.369296 4720 generic.go:334] "Generic (PLEG): container finished" podID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerID="b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715" exitCode=0 Jan 22 06:57:58 crc kubenswrapper[4720]: I0122 06:57:58.369334 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerDied","Data":"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"} Jan 22 06:57:58 crc kubenswrapper[4720]: I0122 06:57:58.388749 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hqfpx" podStartSLOduration=2.6979269930000003 podStartE2EDuration="8.388730902s" podCreationTimestamp="2026-01-22 06:57:50 +0000 UTC" firstStartedPulling="2026-01-22 06:57:52.301898757 +0000 UTC m=+1364.443805462" lastFinishedPulling="2026-01-22 06:57:57.992702666 +0000 UTC m=+1370.134609371" observedRunningTime="2026-01-22 06:57:58.38659741 +0000 UTC m=+1370.528504125" watchObservedRunningTime="2026-01-22 06:57:58.388730902 +0000 UTC m=+1370.530637607" Jan 22 06:57:59 crc kubenswrapper[4720]: E0122 06:57:59.096958 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1eb3e6e5_9c5a_44ab_af1e_46fcd3a22c99.slice\": RecentStats: unable to find data in memory cache]" Jan 22 06:57:59 crc kubenswrapper[4720]: I0122 06:57:59.377650 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerStarted","Data":"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"} Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.121137 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.123246 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.125679 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.125719 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.127991 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-tq2b4" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.130079 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.266679 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.267816 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.267956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xgk\" (UniqueName: 
\"kubernetes.io/projected/61f06b72-9149-4617-881a-22568c2cbe41-kube-api-access-42xgk\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.268111 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.337651 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: E0122 06:58:00.338446 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-42xgk openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/openstackclient" podUID="61f06b72-9149-4617-881a-22568c2cbe41" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.345814 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.369501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.369544 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42xgk\" (UniqueName: \"kubernetes.io/projected/61f06b72-9149-4617-881a-22568c2cbe41-kube-api-access-42xgk\") pod 
\"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.369568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.369592 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.371287 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: E0122 06:58:00.371445 4720 projected.go:194] Error preparing data for projected volume kube-api-access-42xgk for pod watcher-kuttl-default/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "watcher-kuttl-default": no relationship found between node 'crc' and this object Jan 22 06:58:00 crc kubenswrapper[4720]: E0122 06:58:00.371538 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/61f06b72-9149-4617-881a-22568c2cbe41-kube-api-access-42xgk podName:61f06b72-9149-4617-881a-22568c2cbe41 nodeName:}" failed. 
No retries permitted until 2026-01-22 06:58:00.871515453 +0000 UTC m=+1373.013422158 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-42xgk" (UniqueName: "kubernetes.io/projected/61f06b72-9149-4617-881a-22568c2cbe41-kube-api-access-42xgk") pod "openstackclient" (UID: "61f06b72-9149-4617-881a-22568c2cbe41") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: User "system:node:crc" cannot create resource "serviceaccounts/token" in API group "" in the namespace "watcher-kuttl-default": no relationship found between node 'crc' and this object Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.375780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.378034 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret\") pod \"openstackclient\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.392461 4720 generic.go:334] "Generic (PLEG): container finished" podID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerID="04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a" exitCode=0 Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.392591 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.393040 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerDied","Data":"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"} Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.424884 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="61f06b72-9149-4617-881a-22568c2cbe41" podUID="29110f04-f286-4428-b872-a3ed6b6c0919" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.428964 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.430502 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.445338 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.474398 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.474447 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x9fb\" (UniqueName: \"kubernetes.io/projected/29110f04-f286-4428-b872-a3ed6b6c0919-kube-api-access-9x9fb\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 
22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.474479 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config-secret\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.474563 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.474483 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.576464 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.576899 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x9fb\" (UniqueName: \"kubernetes.io/projected/29110f04-f286-4428-b872-a3ed6b6c0919-kube-api-access-9x9fb\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.577073 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-combined-ca-bundle\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.577118 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config-secret\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.577495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.585758 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-openstack-config-secret\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.597162 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x9fb\" (UniqueName: \"kubernetes.io/projected/29110f04-f286-4428-b872-a3ed6b6c0919-kube-api-access-9x9fb\") pod \"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.610280 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29110f04-f286-4428-b872-a3ed6b6c0919-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"29110f04-f286-4428-b872-a3ed6b6c0919\") " pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.679867 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle\") pod \"61f06b72-9149-4617-881a-22568c2cbe41\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.680392 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config\") pod \"61f06b72-9149-4617-881a-22568c2cbe41\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.680614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret\") pod \"61f06b72-9149-4617-881a-22568c2cbe41\" (UID: \"61f06b72-9149-4617-881a-22568c2cbe41\") " Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.681179 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "61f06b72-9149-4617-881a-22568c2cbe41" (UID: "61f06b72-9149-4617-881a-22568c2cbe41"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.681526 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42xgk\" (UniqueName: \"kubernetes.io/projected/61f06b72-9149-4617-881a-22568c2cbe41-kube-api-access-42xgk\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.681638 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.682080 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61f06b72-9149-4617-881a-22568c2cbe41" (UID: "61f06b72-9149-4617-881a-22568c2cbe41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.683922 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "61f06b72-9149-4617-881a-22568c2cbe41" (UID: "61f06b72-9149-4617-881a-22568c2cbe41"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.783768 4720 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.784095 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61f06b72-9149-4617-881a-22568c2cbe41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.789004 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.860347 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:00 crc kubenswrapper[4720]: I0122 06:58:00.860428 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:01 crc kubenswrapper[4720]: I0122 06:58:01.137469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 22 06:58:01 crc kubenswrapper[4720]: I0122 06:58:01.401894 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 22 06:58:01 crc kubenswrapper[4720]: I0122 06:58:01.407261 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"29110f04-f286-4428-b872-a3ed6b6c0919","Type":"ContainerStarted","Data":"57bbf793708d993c5f7f94d493fb42b77e572d44781464a31e4620ceef7fbca7"} Jan 22 06:58:01 crc kubenswrapper[4720]: I0122 06:58:01.415510 4720 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="61f06b72-9149-4617-881a-22568c2cbe41" podUID="29110f04-f286-4428-b872-a3ed6b6c0919" Jan 22 06:58:01 crc kubenswrapper[4720]: I0122 06:58:01.912075 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hqfpx" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="registry-server" probeResult="failure" output=< Jan 22 06:58:01 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Jan 22 06:58:01 crc kubenswrapper[4720]: > Jan 22 06:58:02 crc kubenswrapper[4720]: I0122 06:58:02.226250 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61f06b72-9149-4617-881a-22568c2cbe41" path="/var/lib/kubelet/pods/61f06b72-9149-4617-881a-22568c2cbe41/volumes" Jan 22 06:58:02 crc kubenswrapper[4720]: I0122 06:58:02.422066 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerStarted","Data":"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"} Jan 22 06:58:02 crc kubenswrapper[4720]: I0122 06:58:02.446875 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ngp2n" podStartSLOduration=5.35160812 podStartE2EDuration="8.446839418s" podCreationTimestamp="2026-01-22 06:57:54 +0000 UTC" 
firstStartedPulling="2026-01-22 06:57:58.371166108 +0000 UTC m=+1370.513072813" lastFinishedPulling="2026-01-22 06:58:01.466397406 +0000 UTC m=+1373.608304111" observedRunningTime="2026-01-22 06:58:02.440492916 +0000 UTC m=+1374.582399621" watchObservedRunningTime="2026-01-22 06:58:02.446839418 +0000 UTC m=+1374.588746123" Jan 22 06:58:04 crc kubenswrapper[4720]: I0122 06:58:04.652498 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:58:04 crc kubenswrapper[4720]: I0122 06:58:04.652968 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:58:04 crc kubenswrapper[4720]: I0122 06:58:04.703075 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:58:10 crc kubenswrapper[4720]: I0122 06:58:10.500187 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"29110f04-f286-4428-b872-a3ed6b6c0919","Type":"ContainerStarted","Data":"93f21d90713a7550fdd01364f188e7b20392083f6fc18923f0c0803ca5817dc4"} Jan 22 06:58:10 crc kubenswrapper[4720]: I0122 06:58:10.525138 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=2.131500796 podStartE2EDuration="10.525117172s" podCreationTimestamp="2026-01-22 06:58:00 +0000 UTC" firstStartedPulling="2026-01-22 06:58:01.146835694 +0000 UTC m=+1373.288742399" lastFinishedPulling="2026-01-22 06:58:09.54045207 +0000 UTC m=+1381.682358775" observedRunningTime="2026-01-22 06:58:10.525016599 +0000 UTC m=+1382.666923324" watchObservedRunningTime="2026-01-22 06:58:10.525117172 +0000 UTC m=+1382.667023887" Jan 22 06:58:10 crc kubenswrapper[4720]: I0122 06:58:10.911100 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:10 crc kubenswrapper[4720]: I0122 06:58:10.956370 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:11 crc kubenswrapper[4720]: I0122 06:58:11.639025 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:13 crc kubenswrapper[4720]: I0122 06:58:13.844344 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:13 crc kubenswrapper[4720]: I0122 06:58:13.844947 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" containerName="kube-state-metrics" containerID="cri-o://b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b" gracePeriod=30 Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.294525 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.404622 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2kbx\" (UniqueName: \"kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx\") pod \"5186fc7a-6b08-4177-bb0a-a43da69baa8a\" (UID: \"5186fc7a-6b08-4177-bb0a-a43da69baa8a\") " Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.418776 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx" (OuterVolumeSpecName: "kube-api-access-s2kbx") pod "5186fc7a-6b08-4177-bb0a-a43da69baa8a" (UID: "5186fc7a-6b08-4177-bb0a-a43da69baa8a"). InnerVolumeSpecName "kube-api-access-s2kbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.507111 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2kbx\" (UniqueName: \"kubernetes.io/projected/5186fc7a-6b08-4177-bb0a-a43da69baa8a-kube-api-access-s2kbx\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.522778 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.523061 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hqfpx" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="registry-server" containerID="cri-o://de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c" gracePeriod=2 Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.584364 4720 generic.go:334] "Generic (PLEG): container finished" podID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" containerID="b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b" exitCode=2 Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.584417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"5186fc7a-6b08-4177-bb0a-a43da69baa8a","Type":"ContainerDied","Data":"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b"} Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.584457 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"5186fc7a-6b08-4177-bb0a-a43da69baa8a","Type":"ContainerDied","Data":"b8cb5d9aebf6827568db5f6cd635172184dc4fa7dca0e0022068248bf963f895"} Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.584482 4720 scope.go:117] "RemoveContainer" containerID="b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 
06:58:14.584477 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.605204 4720 scope.go:117] "RemoveContainer" containerID="b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b" Jan 22 06:58:14 crc kubenswrapper[4720]: E0122 06:58:14.608471 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b\": container with ID starting with b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b not found: ID does not exist" containerID="b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.608524 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b"} err="failed to get container status \"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b\": rpc error: code = NotFound desc = could not find container \"b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b\": container with ID starting with b0cb377d189fd1c5254467121d99917f4fe3bf14f957d55163b66990b4ea5d6b not found: ID does not exist" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.611592 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.676430 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.683839 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:14 crc kubenswrapper[4720]: E0122 06:58:14.684368 4720 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" containerName="kube-state-metrics" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.684392 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" containerName="kube-state-metrics" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.684575 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" containerName="kube-state-metrics" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.685351 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.690620 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.690672 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.694554 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.718766 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ngp2n" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.813456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.813571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fgb8m\" (UniqueName: \"kubernetes.io/projected/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-api-access-fgb8m\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.813788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.813860 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.915584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.915681 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgb8m\" (UniqueName: \"kubernetes.io/projected/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-api-access-fgb8m\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.915736 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.915760 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.920527 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.920585 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.921386 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:14 crc kubenswrapper[4720]: I0122 06:58:14.933364 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgb8m\" (UniqueName: \"kubernetes.io/projected/ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8-kube-api-access-fgb8m\") pod \"kube-state-metrics-0\" (UID: \"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.012997 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.271966 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.272579 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-central-agent" containerID="cri-o://c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381" gracePeriod=30 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.272660 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="sg-core" containerID="cri-o://42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125" gracePeriod=30 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.272734 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-notification-agent" containerID="cri-o://62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291" gracePeriod=30 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.272941 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="b548db90-81a9-4307-974a-fab031bfc971" 
containerName="proxy-httpd" containerID="cri-o://8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb" gracePeriod=30 Jan 22 06:58:15 crc kubenswrapper[4720]: W0122 06:58:15.477887 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada8cab6_f7e3_47fc_8ce8_684f61ceb5b8.slice/crio-05e2ffa64d8ede649db635d1cea86db72b01250539f293fd9e7d7167363feb10 WatchSource:0}: Error finding container 05e2ffa64d8ede649db635d1cea86db72b01250539f293fd9e7d7167363feb10: Status 404 returned error can't find the container with id 05e2ffa64d8ede649db635d1cea86db72b01250539f293fd9e7d7167363feb10 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.482689 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.490034 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.528171 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.595406 4720 generic.go:334] "Generic (PLEG): container finished" podID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerID="de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c" exitCode=0 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.595521 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hqfpx" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.595549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerDied","Data":"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c"} Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.595652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hqfpx" event={"ID":"bb8b9b9e-6305-4996-9c98-123c7047142a","Type":"ContainerDied","Data":"bac549315dccb295a0371d465c83fa1e5bdeb20c0ea6c3a84908c0030e1d9710"} Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.595994 4720 scope.go:117] "RemoveContainer" containerID="de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.603707 4720 generic.go:334] "Generic (PLEG): container finished" podID="b548db90-81a9-4307-974a-fab031bfc971" containerID="8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb" exitCode=0 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.603744 4720 generic.go:334] "Generic (PLEG): container finished" podID="b548db90-81a9-4307-974a-fab031bfc971" containerID="42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125" exitCode=2 Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.603801 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerDied","Data":"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"} Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.603847 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerDied","Data":"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"} Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.607642 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8","Type":"ContainerStarted","Data":"05e2ffa64d8ede649db635d1cea86db72b01250539f293fd9e7d7167363feb10"} Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.626724 4720 scope.go:117] "RemoveContainer" containerID="48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.628513 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities\") pod \"bb8b9b9e-6305-4996-9c98-123c7047142a\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.628605 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wx9v\" (UniqueName: \"kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v\") pod \"bb8b9b9e-6305-4996-9c98-123c7047142a\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.628684 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content\") pod \"bb8b9b9e-6305-4996-9c98-123c7047142a\" (UID: \"bb8b9b9e-6305-4996-9c98-123c7047142a\") " Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.629746 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities" (OuterVolumeSpecName: "utilities") pod 
"bb8b9b9e-6305-4996-9c98-123c7047142a" (UID: "bb8b9b9e-6305-4996-9c98-123c7047142a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.640645 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v" (OuterVolumeSpecName: "kube-api-access-7wx9v") pod "bb8b9b9e-6305-4996-9c98-123c7047142a" (UID: "bb8b9b9e-6305-4996-9c98-123c7047142a"). InnerVolumeSpecName "kube-api-access-7wx9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.722331 4720 scope.go:117] "RemoveContainer" containerID="e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.730986 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.731027 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wx9v\" (UniqueName: \"kubernetes.io/projected/bb8b9b9e-6305-4996-9c98-123c7047142a-kube-api-access-7wx9v\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.749242 4720 scope.go:117] "RemoveContainer" containerID="de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c" Jan 22 06:58:15 crc kubenswrapper[4720]: E0122 06:58:15.749891 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c\": container with ID starting with de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c not found: ID does not exist" 
containerID="de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.749971 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c"} err="failed to get container status \"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c\": rpc error: code = NotFound desc = could not find container \"de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c\": container with ID starting with de395868ca3665d6e4fa816dc15cdeacd301b17e03b52de8ff1d9d68d72c478c not found: ID does not exist" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.750002 4720 scope.go:117] "RemoveContainer" containerID="48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51" Jan 22 06:58:15 crc kubenswrapper[4720]: E0122 06:58:15.750377 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51\": container with ID starting with 48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51 not found: ID does not exist" containerID="48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.750422 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51"} err="failed to get container status \"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51\": rpc error: code = NotFound desc = could not find container \"48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51\": container with ID starting with 48a00fb707c3f06657ca439214b2b474fb8274de6b9ff75297f0d6e352c25b51 not found: ID does not exist" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.750455 4720 scope.go:117] 
"RemoveContainer" containerID="e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6" Jan 22 06:58:15 crc kubenswrapper[4720]: E0122 06:58:15.750897 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6\": container with ID starting with e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6 not found: ID does not exist" containerID="e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.750997 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6"} err="failed to get container status \"e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6\": rpc error: code = NotFound desc = could not find container \"e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6\": container with ID starting with e12de1571a77bf50017a0d159a6c0961011fc13b7092b3a481f6cb6525e203f6 not found: ID does not exist" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.814078 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb8b9b9e-6305-4996-9c98-123c7047142a" (UID: "bb8b9b9e-6305-4996-9c98-123c7047142a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.833210 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb8b9b9e-6305-4996-9c98-123c7047142a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.932013 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:58:15 crc kubenswrapper[4720]: I0122 06:58:15.938717 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hqfpx"] Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.220633 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5186fc7a-6b08-4177-bb0a-a43da69baa8a" path="/var/lib/kubelet/pods/5186fc7a-6b08-4177-bb0a-a43da69baa8a/volumes" Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.221619 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" path="/var/lib/kubelet/pods/bb8b9b9e-6305-4996-9c98-123c7047142a/volumes" Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.620631 4720 generic.go:334] "Generic (PLEG): container finished" podID="b548db90-81a9-4307-974a-fab031bfc971" containerID="c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381" exitCode=0 Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.620704 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerDied","Data":"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"} Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.622725 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" 
event={"ID":"ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8","Type":"ContainerStarted","Data":"10b4c6c0ed1bdfade09d1778b072a1b4be10a61593fd538f67a0cd016578ffbc"} Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.622799 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:16 crc kubenswrapper[4720]: I0122 06:58:16.663864 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.243583996 podStartE2EDuration="2.663831216s" podCreationTimestamp="2026-01-22 06:58:14 +0000 UTC" firstStartedPulling="2026-01-22 06:58:15.48236826 +0000 UTC m=+1387.624274965" lastFinishedPulling="2026-01-22 06:58:15.90261548 +0000 UTC m=+1388.044522185" observedRunningTime="2026-01-22 06:58:16.652630045 +0000 UTC m=+1388.794536770" watchObservedRunningTime="2026-01-22 06:58:16.663831216 +0000 UTC m=+1388.805737921" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.766032 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-gqhvs"] Jan 22 06:58:17 crc kubenswrapper[4720]: E0122 06:58:17.766943 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="extract-content" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.766963 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="extract-content" Jan 22 06:58:17 crc kubenswrapper[4720]: E0122 06:58:17.767006 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="extract-utilities" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.767014 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="extract-utilities" Jan 22 06:58:17 crc kubenswrapper[4720]: E0122 06:58:17.767033 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="registry-server" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.767043 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="registry-server" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.767261 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8b9b9e-6305-4996-9c98-123c7047142a" containerName="registry-server" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.768078 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.776170 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"] Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.777651 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.780297 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.781938 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gqhvs"] Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.820488 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"] Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.907451 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9bd\" (UniqueName: \"kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.907515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.907567 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:17 crc kubenswrapper[4720]: I0122 06:58:17.907652 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4d6\" (UniqueName: \"kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.009576 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.009709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4d6\" (UniqueName: \"kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.009828 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv9bd\" (UniqueName: \"kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.009855 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " 
pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.010815 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.010879 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.040532 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv9bd\" (UniqueName: \"kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd\") pod \"watcher-db-create-gqhvs\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") " pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.040872 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4d6\" (UniqueName: \"kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6\") pod \"watcher-49c6-account-create-update-67wmk\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") " pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.091374 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.101942 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.570805 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gqhvs"] Jan 22 06:58:18 crc kubenswrapper[4720]: W0122 06:58:18.575818 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod509e786a_0709_438c_b2fc_1cf663797c56.slice/crio-911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849 WatchSource:0}: Error finding container 911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849: Status 404 returned error can't find the container with id 911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849 Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.644458 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"] Jan 22 06:58:18 crc kubenswrapper[4720]: W0122 06:58:18.647580 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode078fabf_6d6b_44fe_bf95_f236bc469762.slice/crio-a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f WatchSource:0}: Error finding container a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f: Status 404 returned error can't find the container with id a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f Jan 22 06:58:18 crc kubenswrapper[4720]: I0122 06:58:18.649051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gqhvs" 
event={"ID":"509e786a-0709-438c-b2fc-1cf663797c56","Type":"ContainerStarted","Data":"911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849"}
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.525429 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"]
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.527109 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ngp2n" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="registry-server" containerID="cri-o://5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74" gracePeriod=2
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.669924 4720 generic.go:334] "Generic (PLEG): container finished" podID="e078fabf-6d6b-44fe-bf95-f236bc469762" containerID="7e854bfb3f205f9888883a028097f7b7689c09c4ebc28d8f17bc264a304218a0" exitCode=0
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.670037 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" event={"ID":"e078fabf-6d6b-44fe-bf95-f236bc469762","Type":"ContainerDied","Data":"7e854bfb3f205f9888883a028097f7b7689c09c4ebc28d8f17bc264a304218a0"}
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.670152 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" event={"ID":"e078fabf-6d6b-44fe-bf95-f236bc469762","Type":"ContainerStarted","Data":"a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f"}
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.674714 4720 generic.go:334] "Generic (PLEG): container finished" podID="509e786a-0709-438c-b2fc-1cf663797c56" containerID="f2dcf7fc6592ea2bce8fd53f172e7eea21576aedf2c1682c7bc65b43df45782b" exitCode=0
Jan 22 06:58:19 crc kubenswrapper[4720]: I0122 06:58:19.674818 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gqhvs" event={"ID":"509e786a-0709-438c-b2fc-1cf663797c56","Type":"ContainerDied","Data":"f2dcf7fc6592ea2bce8fd53f172e7eea21576aedf2c1682c7bc65b43df45782b"}
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.017636 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngp2n"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.149320 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2s49\" (UniqueName: \"kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49\") pod \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.149853 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities\") pod \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.150057 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content\") pod \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\" (UID: \"f233f6d9-0fea-4c79-99e3-dd3d4edd0644\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.150935 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities" (OuterVolumeSpecName: "utilities") pod "f233f6d9-0fea-4c79-99e3-dd3d4edd0644" (UID: "f233f6d9-0fea-4c79-99e3-dd3d4edd0644"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.156316 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49" (OuterVolumeSpecName: "kube-api-access-p2s49") pod "f233f6d9-0fea-4c79-99e3-dd3d4edd0644" (UID: "f233f6d9-0fea-4c79-99e3-dd3d4edd0644"). InnerVolumeSpecName "kube-api-access-p2s49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.197237 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f233f6d9-0fea-4c79-99e3-dd3d4edd0644" (UID: "f233f6d9-0fea-4c79-99e3-dd3d4edd0644"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.251860 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2s49\" (UniqueName: \"kubernetes.io/projected/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-kube-api-access-p2s49\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.251895 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.251926 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f233f6d9-0fea-4c79-99e3-dd3d4edd0644-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.613778 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.687689 4720 generic.go:334] "Generic (PLEG): container finished" podID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerID="5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74" exitCode=0
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.687803 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerDied","Data":"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"}
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.687840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngp2n" event={"ID":"f233f6d9-0fea-4c79-99e3-dd3d4edd0644","Type":"ContainerDied","Data":"69e357d9b14a23f0c452f873acfcbf957de5e4f6b06fddabf1e84e74751c4adb"}
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.687862 4720 scope.go:117] "RemoveContainer" containerID="5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.688144 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngp2n"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.694010 4720 generic.go:334] "Generic (PLEG): container finished" podID="b548db90-81a9-4307-974a-fab031bfc971" containerID="62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291" exitCode=0
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.694104 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.694195 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerDied","Data":"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"}
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.694236 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"b548db90-81a9-4307-974a-fab031bfc971","Type":"ContainerDied","Data":"7c087d9a583ee49cec1717db9f84331ccd0adfc2db971ace338055f772fc8935"}
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.724917 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"]
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.730795 4720 scope.go:117] "RemoveContainer" containerID="04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.735202 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ngp2n"]
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.761135 4720 scope.go:117] "RemoveContainer" containerID="b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.769934 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hpj7\" (UniqueName: \"kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.769996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.770104 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.770141 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.770177 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.770228 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.770256 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd\") pod \"b548db90-81a9-4307-974a-fab031bfc971\" (UID: \"b548db90-81a9-4307-974a-fab031bfc971\") "
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.771259 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.773574 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.775336 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts" (OuterVolumeSpecName: "scripts") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.775675 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7" (OuterVolumeSpecName: "kube-api-access-5hpj7") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "kube-api-access-5hpj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.803232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.809502 4720 scope.go:117] "RemoveContainer" containerID="5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.810126 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74\": container with ID starting with 5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74 not found: ID does not exist" containerID="5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810167 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74"} err="failed to get container status \"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74\": rpc error: code = NotFound desc = could not find container \"5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74\": container with ID starting with 5155b5a65f7672e7ce6169b31014c9ee3e8736cd74900e3dbde966b112698f74 not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810199 4720 scope.go:117] "RemoveContainer" containerID="04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.810410 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a\": container with ID starting with 04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a not found: ID does not exist" containerID="04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810439 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a"} err="failed to get container status \"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a\": rpc error: code = NotFound desc = could not find container \"04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a\": container with ID starting with 04b3f1f9a2ce541ce7b25950a16a2ef3f45ad0ccda95691b1a5e2be0dbd8696a not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810455 4720 scope.go:117] "RemoveContainer" containerID="b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.810691 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715\": container with ID starting with b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715 not found: ID does not exist" containerID="b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810714 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715"} err="failed to get container status \"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715\": rpc error: code = NotFound desc = could not find container \"b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715\": container with ID starting with b0e61d23100d5341b9b6dcb87578e5315d1bb29ab4eb4e83c9c7829cd44a0715 not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.810728 4720 scope.go:117] "RemoveContainer" containerID="8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.829965 4720 scope.go:117] "RemoveContainer" containerID="42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.847260 4720 scope.go:117] "RemoveContainer" containerID="62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.867385 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data" (OuterVolumeSpecName: "config-data") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.870552 4720 scope.go:117] "RemoveContainer" containerID="c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872088 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hpj7\" (UniqueName: \"kubernetes.io/projected/b548db90-81a9-4307-974a-fab031bfc971-kube-api-access-5hpj7\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872112 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872126 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872135 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872146 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.872156 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b548db90-81a9-4307-974a-fab031bfc971-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.873541 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b548db90-81a9-4307-974a-fab031bfc971" (UID: "b548db90-81a9-4307-974a-fab031bfc971"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.920472 4720 scope.go:117] "RemoveContainer" containerID="8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.921046 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb\": container with ID starting with 8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb not found: ID does not exist" containerID="8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.921105 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb"} err="failed to get container status \"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb\": rpc error: code = NotFound desc = could not find container \"8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb\": container with ID starting with 8ec10c8de6185178130fab4dd72d580681046bbf45afd76a024b782a0e601deb not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.921148 4720 scope.go:117] "RemoveContainer" containerID="42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.921520 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125\": container with ID starting with 42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125 not found: ID does not exist" containerID="42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.921553 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125"} err="failed to get container status \"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125\": rpc error: code = NotFound desc = could not find container \"42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125\": container with ID starting with 42bb58138a905f1f6b1f00c4ade6b74e6bc2d6f35cf2ca575b45d28fb3374125 not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.921578 4720 scope.go:117] "RemoveContainer" containerID="62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.922272 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291\": container with ID starting with 62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291 not found: ID does not exist" containerID="62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.922303 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291"} err="failed to get container status \"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291\": rpc error: code = NotFound desc = could not find container \"62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291\": container with ID starting with 62bdf42917d601bc91cbf6f5e9a6964a1debf6183d8a1e070526ad3615ff5291 not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.922324 4720 scope.go:117] "RemoveContainer" containerID="c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"
Jan 22 06:58:20 crc kubenswrapper[4720]: E0122 06:58:20.922555 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381\": container with ID starting with c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381 not found: ID does not exist" containerID="c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.922581 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381"} err="failed to get container status \"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381\": rpc error: code = NotFound desc = could not find container \"c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381\": container with ID starting with c25b672cba9ddedbd7765629c42e27e45674893ceb41a83b46de6aeddf464381 not found: ID does not exist"
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.974059 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b548db90-81a9-4307-974a-fab031bfc971-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:20 crc kubenswrapper[4720]: I0122 06:58:20.978299 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gqhvs"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.031463 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.034170 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.041235 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.058576 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059086 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="registry-server"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059108 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="registry-server"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059124 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-notification-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059130 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-notification-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059148 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="proxy-httpd"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059155 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="proxy-httpd"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059162 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e078fabf-6d6b-44fe-bf95-f236bc469762" containerName="mariadb-account-create-update"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059170 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e078fabf-6d6b-44fe-bf95-f236bc469762" containerName="mariadb-account-create-update"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059181 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-central-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059187 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-central-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059204 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509e786a-0709-438c-b2fc-1cf663797c56" containerName="mariadb-database-create"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059212 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="509e786a-0709-438c-b2fc-1cf663797c56" containerName="mariadb-database-create"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059223 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="sg-core"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059229 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="sg-core"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059240 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="extract-utilities"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059247 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="extract-utilities"
Jan 22 06:58:21 crc kubenswrapper[4720]: E0122 06:58:21.059259 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="extract-content"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059266 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="extract-content"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059430 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="proxy-httpd"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059443 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e078fabf-6d6b-44fe-bf95-f236bc469762" containerName="mariadb-account-create-update"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059458 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="sg-core"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059467 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-notification-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059475 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b548db90-81a9-4307-974a-fab031bfc971" containerName="ceilometer-central-agent"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059484 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="509e786a-0709-438c-b2fc-1cf663797c56" containerName="mariadb-database-create"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.059491 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" containerName="registry-server"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.061191 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.064359 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.064712 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.064879 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.087847 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176387 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts\") pod \"509e786a-0709-438c-b2fc-1cf663797c56\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") "
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176521 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts\") pod \"e078fabf-6d6b-44fe-bf95-f236bc469762\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") "
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176549 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv9bd\" (UniqueName: \"kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd\") pod \"509e786a-0709-438c-b2fc-1cf663797c56\" (UID: \"509e786a-0709-438c-b2fc-1cf663797c56\") "
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176590 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4d6\" (UniqueName: \"kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6\") pod \"e078fabf-6d6b-44fe-bf95-f236bc469762\" (UID: \"e078fabf-6d6b-44fe-bf95-f236bc469762\") "
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176936 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.176998 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.177036 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.177061 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.177102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.177120 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm7s7\" (UniqueName: \"kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e078fabf-6d6b-44fe-bf95-f236bc469762" (UID: "e078fabf-6d6b-44fe-bf95-f236bc469762"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178149 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "509e786a-0709-438c-b2fc-1cf663797c56" (UID: "509e786a-0709-438c-b2fc-1cf663797c56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178204 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178310 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178389 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e078fabf-6d6b-44fe-bf95-f236bc469762-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.178403 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/509e786a-0709-438c-b2fc-1cf663797c56-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.180300 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6" (OuterVolumeSpecName: "kube-api-access-7z4d6") pod "e078fabf-6d6b-44fe-bf95-f236bc469762" (UID: "e078fabf-6d6b-44fe-bf95-f236bc469762"). InnerVolumeSpecName "kube-api-access-7z4d6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.180809 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd" (OuterVolumeSpecName: "kube-api-access-dv9bd") pod "509e786a-0709-438c-b2fc-1cf663797c56" (UID: "509e786a-0709-438c-b2fc-1cf663797c56"). InnerVolumeSpecName "kube-api-access-dv9bd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280616 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280643 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm7s7\" (UniqueName: \"kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280690 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280856 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280941 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv9bd\" (UniqueName: \"kubernetes.io/projected/509e786a-0709-438c-b2fc-1cf663797c56-kube-api-access-dv9bd\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.280957 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4d6\" (UniqueName: 
\"kubernetes.io/projected/e078fabf-6d6b-44fe-bf95-f236bc469762-kube-api-access-7z4d6\") on node \"crc\" DevicePath \"\"" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.281322 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.281855 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.292039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.295097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.295880 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.297869 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.300618 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.312719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm7s7\" (UniqueName: \"kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7\") pod \"ceilometer-0\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.389405 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.706158 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" event={"ID":"e078fabf-6d6b-44fe-bf95-f236bc469762","Type":"ContainerDied","Data":"a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f"} Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.706549 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a851b48bd04255b0f17b33f7f2bd9a3b9756583c1fbf14b3d1ae2e29d5c3802f" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.706372 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-49c6-account-create-update-67wmk" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.707977 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-gqhvs" event={"ID":"509e786a-0709-438c-b2fc-1cf663797c56","Type":"ContainerDied","Data":"911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849"} Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.708004 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-gqhvs" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.708022 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="911b4cb83694a725a9c9daa79343507203a5b779ccfdcdb0f5c205a3e6ee3849" Jan 22 06:58:21 crc kubenswrapper[4720]: I0122 06:58:21.893491 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:58:21 crc kubenswrapper[4720]: W0122 06:58:21.897021 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod777aebda_6518_41fd_a1e1_0051e2998417.slice/crio-71aa068bf80fd07b962fcc6270b194530fea02d9d695119e484a9253cf43c35e WatchSource:0}: Error finding container 71aa068bf80fd07b962fcc6270b194530fea02d9d695119e484a9253cf43c35e: Status 404 returned error can't find the container with id 71aa068bf80fd07b962fcc6270b194530fea02d9d695119e484a9253cf43c35e Jan 22 06:58:22 crc kubenswrapper[4720]: I0122 06:58:22.221562 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b548db90-81a9-4307-974a-fab031bfc971" path="/var/lib/kubelet/pods/b548db90-81a9-4307-974a-fab031bfc971/volumes" Jan 22 06:58:22 crc kubenswrapper[4720]: I0122 06:58:22.222754 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f233f6d9-0fea-4c79-99e3-dd3d4edd0644" 
path="/var/lib/kubelet/pods/f233f6d9-0fea-4c79-99e3-dd3d4edd0644/volumes" Jan 22 06:58:22 crc kubenswrapper[4720]: I0122 06:58:22.719482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerStarted","Data":"747abfa7ed5e09c67021a969f5de063a3278c727e625a549560a977193f5067d"} Jan 22 06:58:22 crc kubenswrapper[4720]: I0122 06:58:22.719551 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerStarted","Data":"71aa068bf80fd07b962fcc6270b194530fea02d9d695119e484a9253cf43c35e"} Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.210828 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"] Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.212326 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.223395 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"] Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.224435 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.224504 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: 
\"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.224573 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcm7x\" (UniqueName: \"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.224602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.226264 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.226519 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rpjj7" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.326847 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.326947 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data\") pod 
\"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.326990 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcm7x\" (UniqueName: \"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.327012 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.333042 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.333201 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.343592 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data\") pod 
\"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.348442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcm7x\" (UniqueName: \"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x\") pod \"watcher-kuttl-db-sync-s8wq2\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:23 crc kubenswrapper[4720]: I0122 06:58:23.528375 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:58:24 crc kubenswrapper[4720]: W0122 06:58:24.083044 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c492082_fc06_480d_9b68_5d09c9c7549c.slice/crio-3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e WatchSource:0}: Error finding container 3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e: Status 404 returned error can't find the container with id 3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e Jan 22 06:58:24 crc kubenswrapper[4720]: I0122 06:58:24.087768 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"] Jan 22 06:58:24 crc kubenswrapper[4720]: I0122 06:58:24.764114 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerStarted","Data":"6824d5ad7940fb402c871f08df5c7e5942c3c5d6223cb349fa2a323dd95c109c"} Jan 22 06:58:24 crc kubenswrapper[4720]: I0122 06:58:24.766655 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" 
event={"ID":"6c492082-fc06-480d-9b68-5d09c9c7549c","Type":"ContainerStarted","Data":"3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e"} Jan 22 06:58:25 crc kubenswrapper[4720]: I0122 06:58:25.188802 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 22 06:58:25 crc kubenswrapper[4720]: I0122 06:58:25.871736 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerStarted","Data":"db8e9d57ee13137d9a909e2797b30420323ee01644cff9bd29f0cec98428895f"} Jan 22 06:58:26 crc kubenswrapper[4720]: I0122 06:58:26.884671 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerStarted","Data":"77438e53bae5925cb476b121a9344e6952cf22c13c6f3ed5411ed793986415bf"} Jan 22 06:58:26 crc kubenswrapper[4720]: I0122 06:58:26.885023 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:26 crc kubenswrapper[4720]: I0122 06:58:26.909989 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.428278717 podStartE2EDuration="5.909949979s" podCreationTimestamp="2026-01-22 06:58:21 +0000 UTC" firstStartedPulling="2026-01-22 06:58:21.90071638 +0000 UTC m=+1394.042623075" lastFinishedPulling="2026-01-22 06:58:26.382387632 +0000 UTC m=+1398.524294337" observedRunningTime="2026-01-22 06:58:26.906109539 +0000 UTC m=+1399.048016244" watchObservedRunningTime="2026-01-22 06:58:26.909949979 +0000 UTC m=+1399.051856704" Jan 22 06:58:49 crc kubenswrapper[4720]: E0122 06:58:49.187057 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.50:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 06:58:49 crc kubenswrapper[4720]: E0122 06:58:49.187629 4720 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.50:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 22 06:58:49 crc kubenswrapper[4720]: E0122 06:58:49.187830 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.50:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fcm7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-kuttl-db-sync-s8wq2_watcher-kuttl-default(6c492082-fc06-480d-9b68-5d09c9c7549c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 06:58:49 crc kubenswrapper[4720]: E0122 06:58:49.189018 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" Jan 22 06:58:50 crc kubenswrapper[4720]: E0122 06:58:50.108325 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.50:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" Jan 22 06:58:51 crc kubenswrapper[4720]: I0122 06:58:51.398360 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:58:59 crc kubenswrapper[4720]: I0122 06:58:59.780699 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:58:59 crc kubenswrapper[4720]: I0122 06:58:59.781293 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:59:03 crc kubenswrapper[4720]: I0122 06:59:03.403202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" event={"ID":"6c492082-fc06-480d-9b68-5d09c9c7549c","Type":"ContainerStarted","Data":"6154ec025579f67fef067420cf0b7795ffabd535e39b818e26a71d951d9a26be"} Jan 22 06:59:03 crc kubenswrapper[4720]: I0122 06:59:03.427650 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" podStartSLOduration=1.909944108 podStartE2EDuration="40.427629434s" podCreationTimestamp="2026-01-22 06:58:23 +0000 UTC" firstStartedPulling="2026-01-22 06:58:24.086318238 +0000 UTC m=+1396.228224943" lastFinishedPulling="2026-01-22 06:59:02.604003564 +0000 UTC m=+1434.745910269" observedRunningTime="2026-01-22 06:59:03.422456846 +0000 UTC m=+1435.564363541" watchObservedRunningTime="2026-01-22 06:59:03.427629434 +0000 UTC m=+1435.569536129" Jan 22 06:59:06 crc kubenswrapper[4720]: I0122 06:59:06.428056 4720 generic.go:334] "Generic (PLEG): container finished" podID="6c492082-fc06-480d-9b68-5d09c9c7549c" containerID="6154ec025579f67fef067420cf0b7795ffabd535e39b818e26a71d951d9a26be" exitCode=0 Jan 22 06:59:06 crc kubenswrapper[4720]: I0122 06:59:06.428157 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" 
event={"ID":"6c492082-fc06-480d-9b68-5d09c9c7549c","Type":"ContainerDied","Data":"6154ec025579f67fef067420cf0b7795ffabd535e39b818e26a71d951d9a26be"} Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.760403 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.944318 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data\") pod \"6c492082-fc06-480d-9b68-5d09c9c7549c\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.944376 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle\") pod \"6c492082-fc06-480d-9b68-5d09c9c7549c\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.944413 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data\") pod \"6c492082-fc06-480d-9b68-5d09c9c7549c\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.944516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm7x\" (UniqueName: \"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x\") pod \"6c492082-fc06-480d-9b68-5d09c9c7549c\" (UID: \"6c492082-fc06-480d-9b68-5d09c9c7549c\") " Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.949793 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x" (OuterVolumeSpecName: "kube-api-access-fcm7x") pod "6c492082-fc06-480d-9b68-5d09c9c7549c" (UID: "6c492082-fc06-480d-9b68-5d09c9c7549c"). InnerVolumeSpecName "kube-api-access-fcm7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.950516 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6c492082-fc06-480d-9b68-5d09c9c7549c" (UID: "6c492082-fc06-480d-9b68-5d09c9c7549c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.970263 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c492082-fc06-480d-9b68-5d09c9c7549c" (UID: "6c492082-fc06-480d-9b68-5d09c9c7549c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:59:07 crc kubenswrapper[4720]: I0122 06:59:07.994042 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data" (OuterVolumeSpecName: "config-data") pod "6c492082-fc06-480d-9b68-5d09c9c7549c" (UID: "6c492082-fc06-480d-9b68-5d09c9c7549c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.047089 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.047133 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.047219 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c492082-fc06-480d-9b68-5d09c9c7549c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.047241 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcm7x\" (UniqueName: \"kubernetes.io/projected/6c492082-fc06-480d-9b68-5d09c9c7549c-kube-api-access-fcm7x\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.445036 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2" event={"ID":"6c492082-fc06-480d-9b68-5d09c9c7549c","Type":"ContainerDied","Data":"3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e"}
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.445088 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f8dcbc2834bffde4b7b619714b16ac46fd6e7f6a2c7dbb3223329c9bf607c4e"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.445133 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.870167 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:08 crc kubenswrapper[4720]: E0122 06:59:08.870707 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.870725 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.870955 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.871744 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.874724 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.875030 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rpjj7"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.879657 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.888524 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.892648 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.892948 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 06:59:08 crc kubenswrapper[4720]: I0122 06:59:08.919817 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.042023 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.043369 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.045404 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.057497 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.069621 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.069801 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.069856 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.069886 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.069952 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070022 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070056 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070077 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070101 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070159 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070188 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fpx\" (UniqueName: \"kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070211 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmb8\" (UniqueName: \"kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.070246 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zst9\" (UniqueName: \"kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.171547 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172417 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172450 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fpx\" (UniqueName: \"kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172498 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbmb8\" (UniqueName: \"kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zst9\" (UniqueName: \"kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172943 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173003 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173339 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.172981 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173494 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.173894 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.174190 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.174213 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.177998 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.178248 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.178588 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.178596 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.183594 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.183732 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.184424 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.189489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.192374 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbmb8\" (UniqueName: \"kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.195747 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zst9\" (UniqueName: \"kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9\") pod \"watcher-kuttl-api-0\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.197755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fpx\" (UniqueName: \"kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx\") pod \"watcher-kuttl-applier-0\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.205207 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.362288 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.488506 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.650045 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.819812 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:09 crc kubenswrapper[4720]: W0122 06:59:09.823954 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49c39509_be83_4644_aa0b_87ad0237579d.slice/crio-01b0150f87a15a6cafea054f1ca196189d7842ae9226b810d06d93b6cad41a9b WatchSource:0}: Error finding container 01b0150f87a15a6cafea054f1ca196189d7842ae9226b810d06d93b6cad41a9b: Status 404 returned error can't find the container with id 01b0150f87a15a6cafea054f1ca196189d7842ae9226b810d06d93b6cad41a9b
Jan 22 06:59:09 crc kubenswrapper[4720]: W0122 06:59:09.968955 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod601a4487_efe4_4a79_89fa_3a33a89d7b0d.slice/crio-74e605ea162327dfdc142fd024ac42976d8340896ac82b7dcb9b4cc097e3486c WatchSource:0}: Error finding container 74e605ea162327dfdc142fd024ac42976d8340896ac82b7dcb9b4cc097e3486c: Status 404 returned error can't find the container with id 74e605ea162327dfdc142fd024ac42976d8340896ac82b7dcb9b4cc097e3486c
Jan 22 06:59:09 crc kubenswrapper[4720]: I0122 06:59:09.969629 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.472405 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"49c39509-be83-4644-aa0b-87ad0237579d","Type":"ContainerStarted","Data":"01b0150f87a15a6cafea054f1ca196189d7842ae9226b810d06d93b6cad41a9b"}
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.474482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"601a4487-efe4-4a79-89fa-3a33a89d7b0d","Type":"ContainerStarted","Data":"74e605ea162327dfdc142fd024ac42976d8340896ac82b7dcb9b4cc097e3486c"}
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.478879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerStarted","Data":"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98"}
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.478985 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerStarted","Data":"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56"}
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.479015 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerStarted","Data":"d1f8850ce90fa69a833b6da1a4f265a5d968eacf2b63d4a3235ac43bdb6f380d"}
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.482593 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.483744 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": dial tcp 10.217.0.134:9322: connect: connection refused"
Jan 22 06:59:10 crc kubenswrapper[4720]: I0122 06:59:10.506894 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.5068692 podStartE2EDuration="2.5068692s" podCreationTimestamp="2026-01-22 06:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:10.501366403 +0000 UTC m=+1442.643273108" watchObservedRunningTime="2026-01-22 06:59:10.5068692 +0000 UTC m=+1442.648775915"
Jan 22 06:59:12 crc kubenswrapper[4720]: I0122 06:59:12.508087 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"601a4487-efe4-4a79-89fa-3a33a89d7b0d","Type":"ContainerStarted","Data":"d76422a3f33f128135ae3fa2baec96115252e8e66fad4819ca2d257f2af13404"}
Jan 22 06:59:12 crc kubenswrapper[4720]: I0122 06:59:12.509945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"49c39509-be83-4644-aa0b-87ad0237579d","Type":"ContainerStarted","Data":"3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b"}
Jan 22 06:59:12 crc kubenswrapper[4720]: I0122 06:59:12.528727 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.990515113 podStartE2EDuration="4.528696014s" podCreationTimestamp="2026-01-22 06:59:08 +0000 UTC" firstStartedPulling="2026-01-22 06:59:09.971302273 +0000 UTC m=+1442.113208978" lastFinishedPulling="2026-01-22 06:59:11.509483174 +0000 UTC m=+1443.651389879" observedRunningTime="2026-01-22 06:59:12.52645817 +0000 UTC m=+1444.668364885" watchObservedRunningTime="2026-01-22 06:59:12.528696014 +0000 UTC m=+1444.670602729"
Jan 22 06:59:12 crc kubenswrapper[4720]: I0122 06:59:12.549125 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.849183324 podStartE2EDuration="3.549055506s" podCreationTimestamp="2026-01-22 06:59:09 +0000 UTC" firstStartedPulling="2026-01-22 06:59:09.826026071 +0000 UTC m=+1441.967932776" lastFinishedPulling="2026-01-22 06:59:11.525898253 +0000 UTC m=+1443.667804958" observedRunningTime="2026-01-22 06:59:12.544550677 +0000 UTC m=+1444.686457392" watchObservedRunningTime="2026-01-22 06:59:12.549055506 +0000 UTC m=+1444.690962231"
Jan 22 06:59:13 crc kubenswrapper[4720]: I0122 06:59:13.993387 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:14 crc kubenswrapper[4720]: I0122 06:59:14.206258 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:14 crc kubenswrapper[4720]: I0122 06:59:14.363192 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: E0122 06:59:19.080902 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:37314->38.102.83.147:40617: write tcp 38.102.83.147:37314->38.102.83.147:40617: write: broken pipe
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.205666 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.212068 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.363247 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.389278 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.488787 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.521617 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.583277 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.602724 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.612359 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:19 crc kubenswrapper[4720]: I0122 06:59:19.614560 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.381163 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.381829 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-central-agent" containerID="cri-o://747abfa7ed5e09c67021a969f5de063a3278c727e625a549560a977193f5067d" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.381886 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="proxy-httpd" containerID="cri-o://77438e53bae5925cb476b121a9344e6952cf22c13c6f3ed5411ed793986415bf" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.381970 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-notification-agent" containerID="cri-o://6824d5ad7940fb402c871f08df5c7e5942c3c5d6223cb349fa2a323dd95c109c" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.381980 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="sg-core" containerID="cri-o://db8e9d57ee13137d9a909e2797b30420323ee01644cff9bd29f0cec98428895f" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.615685 4720 generic.go:334] "Generic (PLEG): container finished" podID="777aebda-6518-41fd-a1e1-0051e2998417" containerID="77438e53bae5925cb476b121a9344e6952cf22c13c6f3ed5411ed793986415bf" exitCode=0
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.616022 4720 generic.go:334] "Generic (PLEG): container finished" podID="777aebda-6518-41fd-a1e1-0051e2998417" containerID="db8e9d57ee13137d9a909e2797b30420323ee01644cff9bd29f0cec98428895f" exitCode=2
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.615806 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerDied","Data":"77438e53bae5925cb476b121a9344e6952cf22c13c6f3ed5411ed793986415bf"}
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.616066 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerDied","Data":"db8e9d57ee13137d9a909e2797b30420323ee01644cff9bd29f0cec98428895f"}
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.762281 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.778458 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-s8wq2"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.829455 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher49c6-account-delete-hdxsj"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.830538 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj"
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.839332 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher49c6-account-delete-hdxsj"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.901288 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.901533 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="49c39509-be83-4644-aa0b-87ad0237579d" containerName="watcher-applier" containerID="cri-o://3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.918212 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj"
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.918546 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s9wq\" (UniqueName: \"kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj"
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.975988 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.976648 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-kuttl-api-log" containerID="cri-o://4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.977293 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" containerID="cri-o://13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98" gracePeriod=30
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.987963 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:22 crc kubenswrapper[4720]: I0122 06:59:22.988305 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" containerName="watcher-decision-engine" containerID="cri-o://d76422a3f33f128135ae3fa2baec96115252e8e66fad4819ca2d257f2af13404" gracePeriod=30
Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.021312 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s9wq\" (UniqueName: \"kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj"
Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.021377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj"
Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.022768 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.052175 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s9wq\" (UniqueName: \"kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq\") pod \"watcher49c6-account-delete-hdxsj\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.150383 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.629103 4720 generic.go:334] "Generic (PLEG): container finished" podID="c85be378-c080-4165-803d-a7ee88403c07" containerID="4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56" exitCode=143 Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.629174 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerDied","Data":"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56"} Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.633808 4720 generic.go:334] "Generic (PLEG): container finished" podID="777aebda-6518-41fd-a1e1-0051e2998417" containerID="6824d5ad7940fb402c871f08df5c7e5942c3c5d6223cb349fa2a323dd95c109c" exitCode=0 Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.633844 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="777aebda-6518-41fd-a1e1-0051e2998417" containerID="747abfa7ed5e09c67021a969f5de063a3278c727e625a549560a977193f5067d" exitCode=0 Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.633870 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerDied","Data":"6824d5ad7940fb402c871f08df5c7e5942c3c5d6223cb349fa2a323dd95c109c"} Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.633902 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerDied","Data":"747abfa7ed5e09c67021a969f5de063a3278c727e625a549560a977193f5067d"} Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.656013 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher49c6-account-delete-hdxsj"] Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.839478 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943442 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943662 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943798 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943857 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm7s7\" (UniqueName: \"kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943959 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.943995 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.944062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.944097 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml\") pod \"777aebda-6518-41fd-a1e1-0051e2998417\" (UID: \"777aebda-6518-41fd-a1e1-0051e2998417\") " Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.970524 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:23 crc kubenswrapper[4720]: I0122 06:59:23.981531 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.018981 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7" (OuterVolumeSpecName: "kube-api-access-mm7s7") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "kube-api-access-mm7s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.020036 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts" (OuterVolumeSpecName: "scripts") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.045876 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.045928 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.045939 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/777aebda-6518-41fd-a1e1-0051e2998417-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.045948 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm7s7\" (UniqueName: \"kubernetes.io/projected/777aebda-6518-41fd-a1e1-0051e2998417-kube-api-access-mm7s7\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 
crc kubenswrapper[4720]: I0122 06:59:24.078394 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.081142 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.143040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.157318 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.157582 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.157732 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.159122 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data" (OuterVolumeSpecName: "config-data") pod "777aebda-6518-41fd-a1e1-0051e2998417" (UID: "777aebda-6518-41fd-a1e1-0051e2998417"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.246296 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c492082-fc06-480d-9b68-5d09c9c7549c" path="/var/lib/kubelet/pods/6c492082-fc06-480d-9b68-5d09c9c7549c/volumes" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.259219 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777aebda-6518-41fd-a1e1-0051e2998417-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.393186 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.418115 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.427100 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.427214 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="49c39509-be83-4644-aa0b-87ad0237579d" containerName="watcher-applier" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.487541 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.648471 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"777aebda-6518-41fd-a1e1-0051e2998417","Type":"ContainerDied","Data":"71aa068bf80fd07b962fcc6270b194530fea02d9d695119e484a9253cf43c35e"} Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.648551 4720 scope.go:117] "RemoveContainer" containerID="77438e53bae5925cb476b121a9344e6952cf22c13c6f3ed5411ed793986415bf" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.648754 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.652188 4720 generic.go:334] "Generic (PLEG): container finished" podID="c85be378-c080-4165-803d-a7ee88403c07" containerID="13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98" exitCode=0 Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.652263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerDied","Data":"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98"} Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.652309 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"c85be378-c080-4165-803d-a7ee88403c07","Type":"ContainerDied","Data":"d1f8850ce90fa69a833b6da1a4f265a5d968eacf2b63d4a3235ac43bdb6f380d"} Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.652395 4720 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.663626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs\") pod \"c85be378-c080-4165-803d-a7ee88403c07\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.663723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zst9\" (UniqueName: \"kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9\") pod \"c85be378-c080-4165-803d-a7ee88403c07\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.663750 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data\") pod \"c85be378-c080-4165-803d-a7ee88403c07\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.663801 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle\") pod \"c85be378-c080-4165-803d-a7ee88403c07\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.663884 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca\") pod \"c85be378-c080-4165-803d-a7ee88403c07\" (UID: \"c85be378-c080-4165-803d-a7ee88403c07\") " Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.665841 4720 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs" (OuterVolumeSpecName: "logs") pod "c85be378-c080-4165-803d-a7ee88403c07" (UID: "c85be378-c080-4165-803d-a7ee88403c07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.666038 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" event={"ID":"e0541b1c-8509-4175-94fd-f9a341d35e64","Type":"ContainerStarted","Data":"eb5649ecc7c2ab54f2dc5aa7a5878e7656682c34abc3fb110d2837af4ed1984b"} Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.666090 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" event={"ID":"e0541b1c-8509-4175-94fd-f9a341d35e64","Type":"ContainerStarted","Data":"131c7cf5284ada4b7016e43a39db3924e99462fd290c685aa60a0a795b41b812"} Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.671630 4720 scope.go:117] "RemoveContainer" containerID="db8e9d57ee13137d9a909e2797b30420323ee01644cff9bd29f0cec98428895f" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.672111 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9" (OuterVolumeSpecName: "kube-api-access-6zst9") pod "c85be378-c080-4165-803d-a7ee88403c07" (UID: "c85be378-c080-4165-803d-a7ee88403c07"). InnerVolumeSpecName "kube-api-access-6zst9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.734868 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.756301 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c85be378-c080-4165-803d-a7ee88403c07" (UID: "c85be378-c080-4165-803d-a7ee88403c07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.762436 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776082 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c85be378-c080-4165-803d-a7ee88403c07" (UID: "c85be378-c080-4165-803d-a7ee88403c07"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776252 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776645 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85be378-c080-4165-803d-a7ee88403c07-logs\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776672 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zst9\" (UniqueName: \"kubernetes.io/projected/c85be378-c080-4165-803d-a7ee88403c07-kube-api-access-6zst9\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776714 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.776738 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.777248 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-kuttl-api-log" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.777268 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-kuttl-api-log" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.777271 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" podStartSLOduration=2.777259359 podStartE2EDuration="2.777259359s" podCreationTimestamp="2026-01-22 
06:59:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:24.724250864 +0000 UTC m=+1456.866157599" watchObservedRunningTime="2026-01-22 06:59:24.777259359 +0000 UTC m=+1456.919166064" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.777295 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.777969 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.778024 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="proxy-httpd" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.778032 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="proxy-httpd" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.778041 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-central-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.778049 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-central-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.778092 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-notification-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.778100 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-notification-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: E0122 06:59:24.778121 4720 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="sg-core" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.778127 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="sg-core" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.778978 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-notification-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.779007 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.779030 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="proxy-httpd" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.779054 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="ceilometer-central-agent" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.779066 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="777aebda-6518-41fd-a1e1-0051e2998417" containerName="sg-core" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.779078 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-kuttl-api-log" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.789471 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.793291 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.794024 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.794288 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.800188 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data" (OuterVolumeSpecName: "config-data") pod "c85be378-c080-4165-803d-a7ee88403c07" (UID: "c85be378-c080-4165-803d-a7ee88403c07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.828000 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.878689 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.878740 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc 
kubenswrapper[4720]: I0122 06:59:24.878825 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.878855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.878879 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.878970 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5qg\" (UniqueName: \"kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.879010 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.879028 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.879106 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85be378-c080-4165-803d-a7ee88403c07-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.926232 4720 scope.go:117] "RemoveContainer" containerID="6824d5ad7940fb402c871f08df5c7e5942c3c5d6223cb349fa2a323dd95c109c" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.950922 4720 scope.go:117] "RemoveContainer" containerID="747abfa7ed5e09c67021a969f5de063a3278c727e625a549560a977193f5067d" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.972134 4720 scope.go:117] "RemoveContainer" containerID="13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980292 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980463 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5qg\" (UniqueName: \"kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.980569 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd\") pod \"ceilometer-0\" (UID: 
\"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.981097 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.981393 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.987513 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.987710 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.990215 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.994792 4720 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.996903 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 06:59:24 crc kubenswrapper[4720]: I0122 06:59:24.997625 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.004235 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.004709 4720 scope.go:117] "RemoveContainer" containerID="4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.007331 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5qg\" (UniqueName: \"kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg\") pod \"ceilometer-0\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.020166 4720 scope.go:117] "RemoveContainer" containerID="13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98" Jan 22 06:59:25 crc kubenswrapper[4720]: E0122 06:59:25.021822 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98\": container with ID starting with 
13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98 not found: ID does not exist" containerID="13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.021859 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98"} err="failed to get container status \"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98\": rpc error: code = NotFound desc = could not find container \"13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98\": container with ID starting with 13a932f45c7d00825403b4941f132e39cfaba19c3eb49c440e58fd2ce6b57c98 not found: ID does not exist" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.021888 4720 scope.go:117] "RemoveContainer" containerID="4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56" Jan 22 06:59:25 crc kubenswrapper[4720]: E0122 06:59:25.022167 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56\": container with ID starting with 4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56 not found: ID does not exist" containerID="4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.022195 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56"} err="failed to get container status \"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56\": rpc error: code = NotFound desc = could not find container \"4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56\": container with ID starting with 4a2c952974e9aa100a1bbadf0fbd2418cd00fc217b4c88b6932095bc7f4c7b56 not found: ID does not 
exist" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.225240 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.675072 4720 generic.go:334] "Generic (PLEG): container finished" podID="49c39509-be83-4644-aa0b-87ad0237579d" containerID="3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" exitCode=0 Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.675364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"49c39509-be83-4644-aa0b-87ad0237579d","Type":"ContainerDied","Data":"3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b"} Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.685038 4720 generic.go:334] "Generic (PLEG): container finished" podID="e0541b1c-8509-4175-94fd-f9a341d35e64" containerID="eb5649ecc7c2ab54f2dc5aa7a5878e7656682c34abc3fb110d2837af4ed1984b" exitCode=0 Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.685104 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" event={"ID":"e0541b1c-8509-4175-94fd-f9a341d35e64","Type":"ContainerDied","Data":"eb5649ecc7c2ab54f2dc5aa7a5878e7656682c34abc3fb110d2837af4ed1984b"} Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.740220 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.800430 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.902526 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5fpx\" (UniqueName: \"kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx\") pod \"49c39509-be83-4644-aa0b-87ad0237579d\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.902696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle\") pod \"49c39509-be83-4644-aa0b-87ad0237579d\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.902852 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data\") pod \"49c39509-be83-4644-aa0b-87ad0237579d\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.902942 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs\") pod \"49c39509-be83-4644-aa0b-87ad0237579d\" (UID: \"49c39509-be83-4644-aa0b-87ad0237579d\") " Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.903250 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs" (OuterVolumeSpecName: "logs") pod "49c39509-be83-4644-aa0b-87ad0237579d" (UID: "49c39509-be83-4644-aa0b-87ad0237579d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.909067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx" (OuterVolumeSpecName: "kube-api-access-l5fpx") pod "49c39509-be83-4644-aa0b-87ad0237579d" (UID: "49c39509-be83-4644-aa0b-87ad0237579d"). InnerVolumeSpecName "kube-api-access-l5fpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.927649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49c39509-be83-4644-aa0b-87ad0237579d" (UID: "49c39509-be83-4644-aa0b-87ad0237579d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:25 crc kubenswrapper[4720]: I0122 06:59:25.962233 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data" (OuterVolumeSpecName: "config-data") pod "49c39509-be83-4644-aa0b-87ad0237579d" (UID: "49c39509-be83-4644-aa0b-87ad0237579d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.004978 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.005031 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49c39509-be83-4644-aa0b-87ad0237579d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.005059 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5fpx\" (UniqueName: \"kubernetes.io/projected/49c39509-be83-4644-aa0b-87ad0237579d-kube-api-access-l5fpx\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.005071 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49c39509-be83-4644-aa0b-87ad0237579d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.223137 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777aebda-6518-41fd-a1e1-0051e2998417" path="/var/lib/kubelet/pods/777aebda-6518-41fd-a1e1-0051e2998417/volumes" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.223902 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85be378-c080-4165-803d-a7ee88403c07" path="/var/lib/kubelet/pods/c85be378-c080-4165-803d-a7ee88403c07/volumes" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.224694 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.700684 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerStarted","Data":"3383e92c239dce0c5a755ec70414eac3f05cb942aebd7f04e392b0c522daac8f"} Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.700764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerStarted","Data":"8dbf79dbb57a1f654309a8087db616c11eb838b870968f1baf2eae4317d55180"} Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.702687 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.702714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"49c39509-be83-4644-aa0b-87ad0237579d","Type":"ContainerDied","Data":"01b0150f87a15a6cafea054f1ca196189d7842ae9226b810d06d93b6cad41a9b"} Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.702777 4720 scope.go:117] "RemoveContainer" containerID="3306f226979cc15d3d654c3c6fe9ddf2b50d915d6708498e06c13ea6e23d0e7b" Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.706260 4720 generic.go:334] "Generic (PLEG): container finished" podID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" containerID="d76422a3f33f128135ae3fa2baec96115252e8e66fad4819ca2d257f2af13404" exitCode=0 Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.706329 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"601a4487-efe4-4a79-89fa-3a33a89d7b0d","Type":"ContainerDied","Data":"d76422a3f33f128135ae3fa2baec96115252e8e66fad4819ca2d257f2af13404"} Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.732673 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 06:59:26 crc kubenswrapper[4720]: I0122 06:59:26.745402 4720 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.063847 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.125243 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s9wq\" (UniqueName: \"kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq\") pod \"e0541b1c-8509-4175-94fd-f9a341d35e64\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.126067 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts\") pod \"e0541b1c-8509-4175-94fd-f9a341d35e64\" (UID: \"e0541b1c-8509-4175-94fd-f9a341d35e64\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.127081 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0541b1c-8509-4175-94fd-f9a341d35e64" (UID: "e0541b1c-8509-4175-94fd-f9a341d35e64"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.132734 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq" (OuterVolumeSpecName: "kube-api-access-8s9wq") pod "e0541b1c-8509-4175-94fd-f9a341d35e64" (UID: "e0541b1c-8509-4175-94fd-f9a341d35e64"). InnerVolumeSpecName "kube-api-access-8s9wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.160753 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.227701 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbmb8\" (UniqueName: \"kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8\") pod \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.227767 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle\") pod \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.227817 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data\") pod \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.227897 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca\") pod \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\" (UID: \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.228035 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs\") pod \"601a4487-efe4-4a79-89fa-3a33a89d7b0d\" (UID: 
\"601a4487-efe4-4a79-89fa-3a33a89d7b0d\") " Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.228358 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0541b1c-8509-4175-94fd-f9a341d35e64-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.228375 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s9wq\" (UniqueName: \"kubernetes.io/projected/e0541b1c-8509-4175-94fd-f9a341d35e64-kube-api-access-8s9wq\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.234463 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs" (OuterVolumeSpecName: "logs") pod "601a4487-efe4-4a79-89fa-3a33a89d7b0d" (UID: "601a4487-efe4-4a79-89fa-3a33a89d7b0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.239099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8" (OuterVolumeSpecName: "kube-api-access-mbmb8") pod "601a4487-efe4-4a79-89fa-3a33a89d7b0d" (UID: "601a4487-efe4-4a79-89fa-3a33a89d7b0d"). InnerVolumeSpecName "kube-api-access-mbmb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.284081 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "601a4487-efe4-4a79-89fa-3a33a89d7b0d" (UID: "601a4487-efe4-4a79-89fa-3a33a89d7b0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.330336 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbmb8\" (UniqueName: \"kubernetes.io/projected/601a4487-efe4-4a79-89fa-3a33a89d7b0d-kube-api-access-mbmb8\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.330384 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.330400 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/601a4487-efe4-4a79-89fa-3a33a89d7b0d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.338235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data" (OuterVolumeSpecName: "config-data") pod "601a4487-efe4-4a79-89fa-3a33a89d7b0d" (UID: "601a4487-efe4-4a79-89fa-3a33a89d7b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.341035 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "601a4487-efe4-4a79-89fa-3a33a89d7b0d" (UID: "601a4487-efe4-4a79-89fa-3a33a89d7b0d"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.436294 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.436337 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/601a4487-efe4-4a79-89fa-3a33a89d7b0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.714918 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" event={"ID":"e0541b1c-8509-4175-94fd-f9a341d35e64","Type":"ContainerDied","Data":"131c7cf5284ada4b7016e43a39db3924e99462fd290c685aa60a0a795b41b812"} Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.714965 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131c7cf5284ada4b7016e43a39db3924e99462fd290c685aa60a0a795b41b812" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.714942 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher49c6-account-delete-hdxsj" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.719303 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"601a4487-efe4-4a79-89fa-3a33a89d7b0d","Type":"ContainerDied","Data":"74e605ea162327dfdc142fd024ac42976d8340896ac82b7dcb9b4cc097e3486c"} Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.719363 4720 scope.go:117] "RemoveContainer" containerID="d76422a3f33f128135ae3fa2baec96115252e8e66fad4819ca2d257f2af13404" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.719696 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.722034 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerStarted","Data":"e43115d36e6310961fe143c420cb3df89e9395ec0ed4f21ff6773d7e4d7ad575"} Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.772851 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 06:59:27 crc kubenswrapper[4720]: I0122 06:59:27.779166 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 06:59:28 crc kubenswrapper[4720]: I0122 06:59:28.222237 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c39509-be83-4644-aa0b-87ad0237579d" path="/var/lib/kubelet/pods/49c39509-be83-4644-aa0b-87ad0237579d/volumes" Jan 22 06:59:28 crc kubenswrapper[4720]: I0122 06:59:28.223226 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" path="/var/lib/kubelet/pods/601a4487-efe4-4a79-89fa-3a33a89d7b0d/volumes" Jan 22 06:59:28 crc kubenswrapper[4720]: I0122 06:59:28.731973 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerStarted","Data":"4e75b721c5acbb2e1cc075679f1c3daeb4f358f9b703dc6b838830a8d30b7dfd"} Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.207120 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 06:59:29 crc kubenswrapper[4720]: 
I0122 06:59:29.208079 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="c85be378-c080-4165-803d-a7ee88403c07" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.134:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748127 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerStarted","Data":"3f379f5fd7ee84811478fe943576773f0b915ed2e0111aefe2174840b16692dc"} Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748568 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748430 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="proxy-httpd" containerID="cri-o://3f379f5fd7ee84811478fe943576773f0b915ed2e0111aefe2174840b16692dc" gracePeriod=30 Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748344 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-central-agent" containerID="cri-o://3383e92c239dce0c5a755ec70414eac3f05cb942aebd7f04e392b0c522daac8f" gracePeriod=30 Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748448 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="sg-core" containerID="cri-o://4e75b721c5acbb2e1cc075679f1c3daeb4f358f9b703dc6b838830a8d30b7dfd" gracePeriod=30 Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.748463 4720 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-notification-agent" containerID="cri-o://e43115d36e6310961fe143c420cb3df89e9395ec0ed4f21ff6773d7e4d7ad575" gracePeriod=30 Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.774104 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.287022528 podStartE2EDuration="5.774081998s" podCreationTimestamp="2026-01-22 06:59:24 +0000 UTC" firstStartedPulling="2026-01-22 06:59:25.756765083 +0000 UTC m=+1457.898671788" lastFinishedPulling="2026-01-22 06:59:29.243824553 +0000 UTC m=+1461.385731258" observedRunningTime="2026-01-22 06:59:29.772994637 +0000 UTC m=+1461.914901332" watchObservedRunningTime="2026-01-22 06:59:29.774081998 +0000 UTC m=+1461.915988703" Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.780386 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 06:59:29 crc kubenswrapper[4720]: I0122 06:59:29.780465 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.788883 4720 generic.go:334] "Generic (PLEG): container finished" podID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerID="3f379f5fd7ee84811478fe943576773f0b915ed2e0111aefe2174840b16692dc" exitCode=0 Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790129 4720 generic.go:334] "Generic 
(PLEG): container finished" podID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerID="4e75b721c5acbb2e1cc075679f1c3daeb4f358f9b703dc6b838830a8d30b7dfd" exitCode=2 Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790203 4720 generic.go:334] "Generic (PLEG): container finished" podID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerID="e43115d36e6310961fe143c420cb3df89e9395ec0ed4f21ff6773d7e4d7ad575" exitCode=0 Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790270 4720 generic.go:334] "Generic (PLEG): container finished" podID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerID="3383e92c239dce0c5a755ec70414eac3f05cb942aebd7f04e392b0c522daac8f" exitCode=0 Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.789025 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerDied","Data":"3f379f5fd7ee84811478fe943576773f0b915ed2e0111aefe2174840b16692dc"} Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790430 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerDied","Data":"4e75b721c5acbb2e1cc075679f1c3daeb4f358f9b703dc6b838830a8d30b7dfd"} Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790496 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerDied","Data":"e43115d36e6310961fe143c420cb3df89e9395ec0ed4f21ff6773d7e4d7ad575"} Jan 22 06:59:30 crc kubenswrapper[4720]: I0122 06:59:30.790565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerDied","Data":"3383e92c239dce0c5a755ec70414eac3f05cb942aebd7f04e392b0c522daac8f"} Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.100457 4720 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229455 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5qg\" (UniqueName: \"kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229490 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229580 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229729 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.229846 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts\") pod \"949bfffa-9b7f-4557-8dc4-d3406f64f231\" (UID: \"949bfffa-9b7f-4557-8dc4-d3406f64f231\") " Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.230208 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.230538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.235174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg" (OuterVolumeSpecName: "kube-api-access-7p5qg") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "kube-api-access-7p5qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.235803 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts" (OuterVolumeSpecName: "scripts") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.252844 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.276433 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.295135 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.319767 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data" (OuterVolumeSpecName: "config-data") pod "949bfffa-9b7f-4557-8dc4-d3406f64f231" (UID: "949bfffa-9b7f-4557-8dc4-d3406f64f231"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332121 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332167 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332177 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332186 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 
06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332196 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332213 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5qg\" (UniqueName: \"kubernetes.io/projected/949bfffa-9b7f-4557-8dc4-d3406f64f231-kube-api-access-7p5qg\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332225 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/949bfffa-9b7f-4557-8dc4-d3406f64f231-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.332238 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/949bfffa-9b7f-4557-8dc4-d3406f64f231-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.805007 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"949bfffa-9b7f-4557-8dc4-d3406f64f231","Type":"ContainerDied","Data":"8dbf79dbb57a1f654309a8087db616c11eb838b870968f1baf2eae4317d55180"} Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.805087 4720 scope.go:117] "RemoveContainer" containerID="3f379f5fd7ee84811478fe943576773f0b915ed2e0111aefe2174840b16692dc" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.805104 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.828635 4720 scope.go:117] "RemoveContainer" containerID="4e75b721c5acbb2e1cc075679f1c3daeb4f358f9b703dc6b838830a8d30b7dfd" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.850440 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.858912 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.873692 4720 scope.go:117] "RemoveContainer" containerID="e43115d36e6310961fe143c420cb3df89e9395ec0ed4f21ff6773d7e4d7ad575" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.880795 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881228 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49c39509-be83-4644-aa0b-87ad0237579d" containerName="watcher-applier" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881249 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="49c39509-be83-4644-aa0b-87ad0237579d" containerName="watcher-applier" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881262 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0541b1c-8509-4175-94fd-f9a341d35e64" containerName="mariadb-account-delete" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881268 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0541b1c-8509-4175-94fd-f9a341d35e64" containerName="mariadb-account-delete" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881282 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" containerName="watcher-decision-engine" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881289 4720 
state_mem.go:107] "Deleted CPUSet assignment" podUID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" containerName="watcher-decision-engine" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881302 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="sg-core" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881308 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="sg-core" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881320 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-central-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881326 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-central-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881342 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="proxy-httpd" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881349 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="proxy-httpd" Jan 22 06:59:31 crc kubenswrapper[4720]: E0122 06:59:31.881363 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-notification-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881371 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-notification-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881549 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="sg-core" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881569 
4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0541b1c-8509-4175-94fd-f9a341d35e64" containerName="mariadb-account-delete" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881584 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-notification-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881600 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="ceilometer-central-agent" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881611 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="49c39509-be83-4644-aa0b-87ad0237579d" containerName="watcher-applier" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881622 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" containerName="proxy-httpd" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.881630 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="601a4487-efe4-4a79-89fa-3a33a89d7b0d" containerName="watcher-decision-engine" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.884566 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.899332 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.899579 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.899579 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.907565 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:31 crc kubenswrapper[4720]: I0122 06:59:31.917327 4720 scope.go:117] "RemoveContainer" containerID="3383e92c239dce0c5a755ec70414eac3f05cb942aebd7f04e392b0c522daac8f" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.043992 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044318 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044351 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data\") pod 
\"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044396 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044454 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd2tk\" (UniqueName: \"kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044472 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.044498 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146629 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146715 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146935 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2tk\" (UniqueName: \"kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.146961 
4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.147000 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.147219 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.147686 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.147840 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.158851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts\") pod \"ceilometer-0\" (UID: 
\"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.158893 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.159160 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.159235 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.164495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.167389 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2tk\" (UniqueName: \"kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk\") pod \"ceilometer-0\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.214429 4720 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.221754 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949bfffa-9b7f-4557-8dc4-d3406f64f231" path="/var/lib/kubelet/pods/949bfffa-9b7f-4557-8dc4-d3406f64f231/volumes" Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.717469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.813968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerStarted","Data":"69f5969246273b01fb5da17858f46b02fda5e93251f8e7a790e087414b5a0be9"} Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.873780 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gqhvs"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.893226 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-gqhvs"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.902054 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher49c6-account-delete-hdxsj"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.909546 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher49c6-account-delete-hdxsj"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.917235 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"] Jan 22 06:59:32 crc kubenswrapper[4720]: I0122 06:59:32.923951 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-49c6-account-create-update-67wmk"] Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.581220 4720 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["watcher-kuttl-default/watcher-db-create-vgmr2"] Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.583308 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.593375 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vgmr2"] Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.678284 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6gj\" (UniqueName: \"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.678453 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.694981 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db54-account-create-update-j4sp5"] Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.696463 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.698862 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.712218 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db54-account-create-update-j4sp5"] Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.780247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.780348 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6gj\" (UniqueName: \"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.780412 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgnl\" (UniqueName: \"kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.780466 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.781361 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.822628 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6gj\" (UniqueName: \"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj\") pod \"watcher-db-create-vgmr2\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.848335 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerStarted","Data":"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8"} Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.887966 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.888072 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgnl\" (UniqueName: 
\"kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.889175 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.916216 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:33 crc kubenswrapper[4720]: I0122 06:59:33.995702 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgnl\" (UniqueName: \"kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl\") pod \"watcher-db54-account-create-update-j4sp5\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.034885 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.220247 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="509e786a-0709-438c-b2fc-1cf663797c56" path="/var/lib/kubelet/pods/509e786a-0709-438c-b2fc-1cf663797c56/volumes" Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.221209 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0541b1c-8509-4175-94fd-f9a341d35e64" path="/var/lib/kubelet/pods/e0541b1c-8509-4175-94fd-f9a341d35e64/volumes" Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.221886 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e078fabf-6d6b-44fe-bf95-f236bc469762" path="/var/lib/kubelet/pods/e078fabf-6d6b-44fe-bf95-f236bc469762/volumes" Jan 22 06:59:34 crc kubenswrapper[4720]: W0122 06:59:34.552698 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7166439_84ea_4607_a3a7_3dcd65e1001a.slice/crio-5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f WatchSource:0}: Error finding container 5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f: Status 404 returned error can't find the container with id 5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.555521 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vgmr2"] Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.688775 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db54-account-create-update-j4sp5"] Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.857626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vgmr2" 
event={"ID":"b7166439-84ea-4607-a3a7-3dcd65e1001a","Type":"ContainerStarted","Data":"d81b4d0cfc63fd64314c793b2f71b312f481851a03f9ea54e104e647f35d5c28"} Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.857674 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vgmr2" event={"ID":"b7166439-84ea-4607-a3a7-3dcd65e1001a","Type":"ContainerStarted","Data":"5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f"} Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.859824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" event={"ID":"f45f6544-152d-4235-a6c6-d72625f9d66f","Type":"ContainerStarted","Data":"64cf752fc4b705d26561fb6a06c1d91676c2c1297c8d860a55783111b936e97f"} Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.859852 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" event={"ID":"f45f6544-152d-4235-a6c6-d72625f9d66f","Type":"ContainerStarted","Data":"68304f285f322067d46eeacfaa877eb7b934f53c879f5df297f5218f564938ea"} Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.862709 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerStarted","Data":"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223"} Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.873633 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-vgmr2" podStartSLOduration=1.8736120440000001 podStartE2EDuration="1.873612044s" podCreationTimestamp="2026-01-22 06:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:34.870202487 +0000 UTC m=+1467.012109192" watchObservedRunningTime="2026-01-22 
06:59:34.873612044 +0000 UTC m=+1467.015518749" Jan 22 06:59:34 crc kubenswrapper[4720]: I0122 06:59:34.893138 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" podStartSLOduration=1.893118962 podStartE2EDuration="1.893118962s" podCreationTimestamp="2026-01-22 06:59:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:34.892568006 +0000 UTC m=+1467.034474711" watchObservedRunningTime="2026-01-22 06:59:34.893118962 +0000 UTC m=+1467.035025657" Jan 22 06:59:35 crc kubenswrapper[4720]: I0122 06:59:35.878280 4720 generic.go:334] "Generic (PLEG): container finished" podID="b7166439-84ea-4607-a3a7-3dcd65e1001a" containerID="d81b4d0cfc63fd64314c793b2f71b312f481851a03f9ea54e104e647f35d5c28" exitCode=0 Jan 22 06:59:35 crc kubenswrapper[4720]: I0122 06:59:35.878367 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vgmr2" event={"ID":"b7166439-84ea-4607-a3a7-3dcd65e1001a","Type":"ContainerDied","Data":"d81b4d0cfc63fd64314c793b2f71b312f481851a03f9ea54e104e647f35d5c28"} Jan 22 06:59:35 crc kubenswrapper[4720]: I0122 06:59:35.880832 4720 generic.go:334] "Generic (PLEG): container finished" podID="f45f6544-152d-4235-a6c6-d72625f9d66f" containerID="64cf752fc4b705d26561fb6a06c1d91676c2c1297c8d860a55783111b936e97f" exitCode=0 Jan 22 06:59:35 crc kubenswrapper[4720]: I0122 06:59:35.880883 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" event={"ID":"f45f6544-152d-4235-a6c6-d72625f9d66f","Type":"ContainerDied","Data":"64cf752fc4b705d26561fb6a06c1d91676c2c1297c8d860a55783111b936e97f"} Jan 22 06:59:36 crc kubenswrapper[4720]: I0122 06:59:36.890776 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerStarted","Data":"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758"} Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.427119 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.434044 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.580283 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts\") pod \"b7166439-84ea-4607-a3a7-3dcd65e1001a\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.580753 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts\") pod \"f45f6544-152d-4235-a6c6-d72625f9d66f\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.580859 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7166439-84ea-4607-a3a7-3dcd65e1001a" (UID: "b7166439-84ea-4607-a3a7-3dcd65e1001a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.581211 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f45f6544-152d-4235-a6c6-d72625f9d66f" (UID: "f45f6544-152d-4235-a6c6-d72625f9d66f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.581551 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6gj\" (UniqueName: \"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj\") pod \"b7166439-84ea-4607-a3a7-3dcd65e1001a\" (UID: \"b7166439-84ea-4607-a3a7-3dcd65e1001a\") " Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.581584 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdgnl\" (UniqueName: \"kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl\") pod \"f45f6544-152d-4235-a6c6-d72625f9d66f\" (UID: \"f45f6544-152d-4235-a6c6-d72625f9d66f\") " Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.582380 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f45f6544-152d-4235-a6c6-d72625f9d66f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.582400 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7166439-84ea-4607-a3a7-3dcd65e1001a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.585984 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj" (OuterVolumeSpecName: "kube-api-access-cg6gj") pod "b7166439-84ea-4607-a3a7-3dcd65e1001a" (UID: "b7166439-84ea-4607-a3a7-3dcd65e1001a"). InnerVolumeSpecName "kube-api-access-cg6gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.587040 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl" (OuterVolumeSpecName: "kube-api-access-mdgnl") pod "f45f6544-152d-4235-a6c6-d72625f9d66f" (UID: "f45f6544-152d-4235-a6c6-d72625f9d66f"). InnerVolumeSpecName "kube-api-access-mdgnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.683347 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6gj\" (UniqueName: \"kubernetes.io/projected/b7166439-84ea-4607-a3a7-3dcd65e1001a-kube-api-access-cg6gj\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.683378 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdgnl\" (UniqueName: \"kubernetes.io/projected/f45f6544-152d-4235-a6c6-d72625f9d66f-kube-api-access-mdgnl\") on node \"crc\" DevicePath \"\"" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.903702 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.903679 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db54-account-create-update-j4sp5" event={"ID":"f45f6544-152d-4235-a6c6-d72625f9d66f","Type":"ContainerDied","Data":"68304f285f322067d46eeacfaa877eb7b934f53c879f5df297f5218f564938ea"} Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.903927 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68304f285f322067d46eeacfaa877eb7b934f53c879f5df297f5218f564938ea" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.907669 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerStarted","Data":"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4"} Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.909348 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.911785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-vgmr2" event={"ID":"b7166439-84ea-4607-a3a7-3dcd65e1001a","Type":"ContainerDied","Data":"5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f"} Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.911848 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c4e3f10142e6da856419b16b5528aa27e692268aa05816406e8d7f12bf2a18f" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.911874 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-vgmr2" Jan 22 06:59:37 crc kubenswrapper[4720]: I0122 06:59:37.942016 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.160274636 podStartE2EDuration="6.941994129s" podCreationTimestamp="2026-01-22 06:59:31 +0000 UTC" firstStartedPulling="2026-01-22 06:59:32.714465555 +0000 UTC m=+1464.856372260" lastFinishedPulling="2026-01-22 06:59:37.496185048 +0000 UTC m=+1469.638091753" observedRunningTime="2026-01-22 06:59:37.937729557 +0000 UTC m=+1470.079636272" watchObservedRunningTime="2026-01-22 06:59:37.941994129 +0000 UTC m=+1470.083900834" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.051254 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"] Jan 22 06:59:39 crc kubenswrapper[4720]: E0122 06:59:39.051947 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7166439-84ea-4607-a3a7-3dcd65e1001a" containerName="mariadb-database-create" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.051961 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7166439-84ea-4607-a3a7-3dcd65e1001a" containerName="mariadb-database-create" Jan 22 06:59:39 crc kubenswrapper[4720]: E0122 06:59:39.051990 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45f6544-152d-4235-a6c6-d72625f9d66f" containerName="mariadb-account-create-update" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.051996 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45f6544-152d-4235-a6c6-d72625f9d66f" containerName="mariadb-account-create-update" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.052150 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7166439-84ea-4607-a3a7-3dcd65e1001a" containerName="mariadb-database-create" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.052165 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f45f6544-152d-4235-a6c6-d72625f9d66f" containerName="mariadb-account-create-update" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.052778 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.054645 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.061520 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9bnfq" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.065661 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"] Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.211705 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6lk\" (UniqueName: \"kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.211836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.212405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.212526 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.316048 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6lk\" (UniqueName: \"kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.316147 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.316176 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.316285 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.323071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.325744 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.327529 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.335431 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6lk\" (UniqueName: \"kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk\") pod \"watcher-kuttl-db-sync-p9r4r\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.374673 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.836513 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"] Jan 22 06:59:39 crc kubenswrapper[4720]: W0122 06:59:39.843160 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf156a962_58f9_4335_abef_b6ef9b0531e8.slice/crio-5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c WatchSource:0}: Error finding container 5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c: Status 404 returned error can't find the container with id 5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c Jan 22 06:59:39 crc kubenswrapper[4720]: I0122 06:59:39.941080 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" event={"ID":"f156a962-58f9-4335-abef-b6ef9b0531e8","Type":"ContainerStarted","Data":"5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c"} Jan 22 06:59:40 crc kubenswrapper[4720]: I0122 06:59:40.951681 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" event={"ID":"f156a962-58f9-4335-abef-b6ef9b0531e8","Type":"ContainerStarted","Data":"9a5607d7819b622b913b6449fd1e6264ff33bd18e7bb5ea97c8bbb87ab558e9a"} Jan 22 06:59:40 crc kubenswrapper[4720]: I0122 06:59:40.973604 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" podStartSLOduration=1.9735834319999999 podStartE2EDuration="1.973583432s" podCreationTimestamp="2026-01-22 06:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:40.9693162 +0000 UTC m=+1473.111222925" watchObservedRunningTime="2026-01-22 
06:59:40.973583432 +0000 UTC m=+1473.115490137" Jan 22 06:59:43 crc kubenswrapper[4720]: I0122 06:59:43.975973 4720 generic.go:334] "Generic (PLEG): container finished" podID="f156a962-58f9-4335-abef-b6ef9b0531e8" containerID="9a5607d7819b622b913b6449fd1e6264ff33bd18e7bb5ea97c8bbb87ab558e9a" exitCode=0 Jan 22 06:59:43 crc kubenswrapper[4720]: I0122 06:59:43.976079 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" event={"ID":"f156a962-58f9-4335-abef-b6ef9b0531e8","Type":"ContainerDied","Data":"9a5607d7819b622b913b6449fd1e6264ff33bd18e7bb5ea97c8bbb87ab558e9a"} Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.297665 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.337369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data\") pod \"f156a962-58f9-4335-abef-b6ef9b0531e8\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.337442 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle\") pod \"f156a962-58f9-4335-abef-b6ef9b0531e8\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.337566 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6lk\" (UniqueName: \"kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk\") pod \"f156a962-58f9-4335-abef-b6ef9b0531e8\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.337598 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data\") pod \"f156a962-58f9-4335-abef-b6ef9b0531e8\" (UID: \"f156a962-58f9-4335-abef-b6ef9b0531e8\") " Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.355982 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f156a962-58f9-4335-abef-b6ef9b0531e8" (UID: "f156a962-58f9-4335-abef-b6ef9b0531e8"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.356061 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk" (OuterVolumeSpecName: "kube-api-access-9r6lk") pod "f156a962-58f9-4335-abef-b6ef9b0531e8" (UID: "f156a962-58f9-4335-abef-b6ef9b0531e8"). InnerVolumeSpecName "kube-api-access-9r6lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.372998 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f156a962-58f9-4335-abef-b6ef9b0531e8" (UID: "f156a962-58f9-4335-abef-b6ef9b0531e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.400685 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data" (OuterVolumeSpecName: "config-data") pod "f156a962-58f9-4335-abef-b6ef9b0531e8" (UID: "f156a962-58f9-4335-abef-b6ef9b0531e8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.440453 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6lk\" (UniqueName: \"kubernetes.io/projected/f156a962-58f9-4335-abef-b6ef9b0531e8-kube-api-access-9r6lk\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.440515 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.440531 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.440543 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f156a962-58f9-4335-abef-b6ef9b0531e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.997432 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r" event={"ID":"f156a962-58f9-4335-abef-b6ef9b0531e8","Type":"ContainerDied","Data":"5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c"}
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.997715 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bcf0719f2550e20c12737f785df74c8fb6cbc23aa9c55b99d6c5b10c869635c"
Jan 22 06:59:45 crc kubenswrapper[4720]: I0122 06:59:45.997534 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.294470 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: E0122 06:59:46.295039 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f156a962-58f9-4335-abef-b6ef9b0531e8" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.295066 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f156a962-58f9-4335-abef-b6ef9b0531e8" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.295314 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f156a962-58f9-4335-abef-b6ef9b0531e8" containerName="watcher-kuttl-db-sync"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.296121 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.298798 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.299279 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-9bnfq"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.322768 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.359439 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.359655 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4cmf\" (UniqueName: \"kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.359843 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.359945 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.360124 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.437929 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.439375 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.443268 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.455658 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.456951 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.461878 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.463890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.464005 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.464056 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.464095 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.464164 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4cmf\" (UniqueName: \"kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.465279 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.475389 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.476598 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.485561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.486624 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.494664 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4cmf\" (UniqueName: \"kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.547665 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566028 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566146 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kqwr\" (UniqueName: \"kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566162 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566245 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccc7\" (UniqueName: \"kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566260 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.566322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.614391 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.667885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.667964 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccc7\" (UniqueName: \"kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.667994 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668023 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668054 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668095 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668156 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668184 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kqwr\" (UniqueName: \"kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668201 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668471 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.668677 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.675874 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.676736 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.677172 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.677593 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.687670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.698002 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kqwr\" (UniqueName: \"kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr\") pod \"watcher-kuttl-api-0\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.703018 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccc7\" (UniqueName: \"kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7\") pod \"watcher-kuttl-applier-0\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.757453 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:46 crc kubenswrapper[4720]: I0122 06:59:46.852079 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:47 crc kubenswrapper[4720]: I0122 06:59:47.178383 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:47 crc kubenswrapper[4720]: I0122 06:59:47.249336 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:47 crc kubenswrapper[4720]: I0122 06:59:47.334024 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.024785 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"37f7586a-44ae-4c9f-9049-45c9dba9d7a9","Type":"ContainerStarted","Data":"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.025143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"37f7586a-44ae-4c9f-9049-45c9dba9d7a9","Type":"ContainerStarted","Data":"724cbf3f22b3be27b2ac6a099d92a2d8da5faa542b1a0ea70b23b9f7f1b4b047"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.028555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7fee6a10-fdd0-4b20-aa01-88c426dc5d91","Type":"ContainerStarted","Data":"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.028612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7fee6a10-fdd0-4b20-aa01-88c426dc5d91","Type":"ContainerStarted","Data":"ca2077164d5fc3b50abf715fa88ca667edc548c8a6010586eac5a5064b9db12d"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.031353 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerStarted","Data":"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.031383 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerStarted","Data":"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.031395 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerStarted","Data":"d7e15eb9607feb99968b12607181d837779688ceabe83c42b129e5ca90d76cbe"}
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.032241 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.074550 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.074523299 podStartE2EDuration="2.074523299s" podCreationTimestamp="2026-01-22 06:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:48.056610647 +0000 UTC m=+1480.198517352" watchObservedRunningTime="2026-01-22 06:59:48.074523299 +0000 UTC m=+1480.216430024"
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.078590 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.078569234 podStartE2EDuration="2.078569234s" podCreationTimestamp="2026-01-22 06:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:48.075561768 +0000 UTC m=+1480.217468503" watchObservedRunningTime="2026-01-22 06:59:48.078569234 +0000 UTC m=+1480.220475949"
Jan 22 06:59:48 crc kubenswrapper[4720]: I0122 06:59:48.091880 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.091856214 podStartE2EDuration="2.091856214s" podCreationTimestamp="2026-01-22 06:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 06:59:48.090712061 +0000 UTC m=+1480.232618776" watchObservedRunningTime="2026-01-22 06:59:48.091856214 +0000 UTC m=+1480.233762929"
Jan 22 06:59:50 crc kubenswrapper[4720]: I0122 06:59:50.046786 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 22 06:59:51 crc kubenswrapper[4720]: I0122 06:59:50.692805 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:51 crc kubenswrapper[4720]: I0122 06:59:51.758783 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:51 crc kubenswrapper[4720]: I0122 06:59:51.853288 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.615244 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.665789 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.758960 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.763920 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.853358 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:56 crc kubenswrapper[4720]: I0122 06:59:56.879993 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:57 crc kubenswrapper[4720]: I0122 06:59:57.263959 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:57 crc kubenswrapper[4720]: I0122 06:59:57.270247 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 06:59:57 crc kubenswrapper[4720]: I0122 06:59:57.296857 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 06:59:57 crc kubenswrapper[4720]: I0122 06:59:57.308745 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.249870 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.258979 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-p9r4r"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.299623 4720 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-9bnfq\" not found"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.325191 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherdb54-account-delete-dcbwq"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.326697 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.335850 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherdb54-account-delete-dcbwq"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.375067 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.390787 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.390926 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4bb\" (UniqueName: \"kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: E0122 06:59:59.391096 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 06:59:59 crc kubenswrapper[4720]: E0122 06:59:59.391175 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data podName:7fee6a10-fdd0-4b20-aa01-88c426dc5d91 nodeName:}" failed. No retries permitted until 2026-01-22 06:59:59.891151409 +0000 UTC m=+1492.033058114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.426394 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.426640 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" containerName="watcher-applier" containerID="cri-o://7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.492790 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch4bb\" (UniqueName: \"kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.492894 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.492800 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.493289 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-kuttl-api-log" containerID="cri-o://ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.493796 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.493815 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-api" containerID="cri-o://2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.524565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch4bb\" (UniqueName: \"kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb\") pod \"watcherdb54-account-delete-dcbwq\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.648565 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.784354 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.784404 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.784471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.785227 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.785280 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe" gracePeriod=600
Jan 22 06:59:59 crc kubenswrapper[4720]: E0122 06:59:59.902172 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 06:59:59 crc kubenswrapper[4720]: E0122 06:59:59.902444 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data podName:7fee6a10-fdd0-4b20-aa01-88c426dc5d91 nodeName:}" failed. No retries permitted until 2026-01-22 07:00:00.902429332 +0000 UTC m=+1493.044336037 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.951029 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.951424 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-central-agent" containerID="cri-o://368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.952089 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="proxy-httpd" containerID="cri-o://04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.952155 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="sg-core" containerID="cri-o://30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758" gracePeriod=30
Jan 22 06:59:59 crc kubenswrapper[4720]: I0122 06:59:59.952197 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-notification-agent" containerID="cri-o://cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223" gracePeriod=30
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.041519 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.173459 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"]
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.186145 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.190584 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.190955 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.202654 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"]
Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.212342 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: 
\"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.212508 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6xs7\" (UniqueName: \"kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.212581 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.242602 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f156a962-58f9-4335-abef-b6ef9b0531e8" path="/var/lib/kubelet/pods/f156a962-58f9-4335-abef-b6ef9b0531e8/volumes" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.260976 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherdb54-account-delete-dcbwq"] Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.315419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6xs7\" (UniqueName: \"kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.315550 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.315636 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.316836 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.331176 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.360501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6xs7\" (UniqueName: \"kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7\") pod \"collect-profiles-29484420-f6lwk\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 
07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.402269 4720 generic.go:334] "Generic (PLEG): container finished" podID="0003d040-a30c-45fb-9521-41221cb33286" containerID="30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758" exitCode=2 Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.402407 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerDied","Data":"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758"} Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.425118 4720 generic.go:334] "Generic (PLEG): container finished" podID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerID="ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23" exitCode=143 Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.425204 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerDied","Data":"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23"} Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.444408 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq" event={"ID":"70a444b0-cf26-46ac-8caf-187a0bccd253","Type":"ContainerStarted","Data":"f5f35e2c5c853f35bdc33961191be560bf8868899b77a3e80e26540686a087ab"} Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.464750 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe" exitCode=0 Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.465046 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" 
containerName="watcher-decision-engine" containerID="cri-o://8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936" gracePeriod=30 Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.465032 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe"} Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.465131 4720 scope.go:117] "RemoveContainer" containerID="c3c253bdde52e7e13d966a713540bfc6fece8955f90bf08577d309f38a73e677" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.535402 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.859761 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.935786 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle\") pod \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.936623 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs\") pod \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.936691 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccc7\" (UniqueName: 
\"kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7\") pod \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.936739 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data\") pod \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\" (UID: \"37f7586a-44ae-4c9f-9049-45c9dba9d7a9\") " Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.937112 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs" (OuterVolumeSpecName: "logs") pod "37f7586a-44ae-4c9f-9049-45c9dba9d7a9" (UID: "37f7586a-44ae-4c9f-9049-45c9dba9d7a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:00 crc kubenswrapper[4720]: E0122 07:00:00.937216 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:00:00 crc kubenswrapper[4720]: E0122 07:00:00.937274 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data podName:7fee6a10-fdd0-4b20-aa01-88c426dc5d91 nodeName:}" failed. No retries permitted until 2026-01-22 07:00:02.937256198 +0000 UTC m=+1495.079162903 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.937383 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.945295 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7" (OuterVolumeSpecName: "kube-api-access-sccc7") pod "37f7586a-44ae-4c9f-9049-45c9dba9d7a9" (UID: "37f7586a-44ae-4c9f-9049-45c9dba9d7a9"). InnerVolumeSpecName "kube-api-access-sccc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:00 crc kubenswrapper[4720]: I0122 07:00:00.962225 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37f7586a-44ae-4c9f-9049-45c9dba9d7a9" (UID: "37f7586a-44ae-4c9f-9049-45c9dba9d7a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.014448 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data" (OuterVolumeSpecName: "config-data") pod "37f7586a-44ae-4c9f-9049-45c9dba9d7a9" (UID: "37f7586a-44ae-4c9f-9049-45c9dba9d7a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.039330 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.039386 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccc7\" (UniqueName: \"kubernetes.io/projected/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-kube-api-access-sccc7\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.039398 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f7586a-44ae-4c9f-9049-45c9dba9d7a9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.125705 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"] Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.480560 4720 generic.go:334] "Generic (PLEG): container finished" podID="70a444b0-cf26-46ac-8caf-187a0bccd253" containerID="f7f9a35a28503c7f5cdf6d056f6a50b947d856a6f06156c496f25d1e28fde1d4" exitCode=0 Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.480668 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq" event={"ID":"70a444b0-cf26-46ac-8caf-187a0bccd253","Type":"ContainerDied","Data":"f7f9a35a28503c7f5cdf6d056f6a50b947d856a6f06156c496f25d1e28fde1d4"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.491421 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c"} Jan 22 07:00:01 
crc kubenswrapper[4720]: I0122 07:00:01.494245 4720 generic.go:334] "Generic (PLEG): container finished" podID="0003d040-a30c-45fb-9521-41221cb33286" containerID="04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4" exitCode=0 Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.494282 4720 generic.go:334] "Generic (PLEG): container finished" podID="0003d040-a30c-45fb-9521-41221cb33286" containerID="368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8" exitCode=0 Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.494334 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerDied","Data":"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.494393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerDied","Data":"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.495819 4720 generic.go:334] "Generic (PLEG): container finished" podID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" containerID="7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c" exitCode=0 Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.495875 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"37f7586a-44ae-4c9f-9049-45c9dba9d7a9","Type":"ContainerDied","Data":"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.495955 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"37f7586a-44ae-4c9f-9049-45c9dba9d7a9","Type":"ContainerDied","Data":"724cbf3f22b3be27b2ac6a099d92a2d8da5faa542b1a0ea70b23b9f7f1b4b047"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.495984 4720 scope.go:117] "RemoveContainer" containerID="7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.496158 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.502391 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" event={"ID":"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c","Type":"ContainerStarted","Data":"2fd1f6213d307140eac84aae651360f60c42a6f8497d24e80e7cad9b552dc318"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.502462 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" event={"ID":"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c","Type":"ContainerStarted","Data":"74c08f473c5494d1915db78a13f2a4d0ecc0d9987cbfc8122055c1f1f61a82e6"} Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.542259 4720 scope.go:117] "RemoveContainer" containerID="7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c" Jan 22 07:00:01 crc kubenswrapper[4720]: E0122 07:00:01.543485 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c\": container with ID starting with 7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c not found: ID does not exist" containerID="7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.543619 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c"} err="failed to get container status \"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c\": rpc error: code = NotFound desc = could not find container \"7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c\": container with ID starting with 7b3591fdc5b78010ccdcba98933afdbfb2ad45c54ee2c8981969027ea488293c not found: ID does not exist" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.554514 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.561577 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.758636 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.143:9322/\": dial tcp 10.217.0.143:9322: connect: connection refused" Jan 22 07:00:01 crc kubenswrapper[4720]: I0122 07:00:01.758683 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.143:9322/\": dial tcp 10.217.0.143:9322: connect: connection refused" Jan 22 07:00:01 crc kubenswrapper[4720]: E0122 07:00:01.854647 4720 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc38ccafb_7319_4e13_a9e1_f38f73a8bd3c.slice/crio-conmon-2fd1f6213d307140eac84aae651360f60c42a6f8497d24e80e7cad9b552dc318.scope\": RecentStats: unable to find data in memory cache]" Jan 22 07:00:02 crc 
kubenswrapper[4720]: I0122 07:00:02.223542 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" path="/var/lib/kubelet/pods/37f7586a-44ae-4c9f-9049-45c9dba9d7a9/volumes" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.362090 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.508443 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.508542 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.508615 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd2tk\" (UniqueName: \"kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.508698 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.508782 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.509083 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.509605 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.509651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.509714 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle\") pod \"0003d040-a30c-45fb-9521-41221cb33286\" (UID: \"0003d040-a30c-45fb-9521-41221cb33286\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.510018 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: 
"0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.510368 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.510383 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0003d040-a30c-45fb-9521-41221cb33286-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.515600 4720 generic.go:334] "Generic (PLEG): container finished" podID="0003d040-a30c-45fb-9521-41221cb33286" containerID="cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223" exitCode=0 Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.515666 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerDied","Data":"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223"} Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.515703 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0003d040-a30c-45fb-9521-41221cb33286","Type":"ContainerDied","Data":"69f5969246273b01fb5da17858f46b02fda5e93251f8e7a790e087414b5a0be9"} Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.515722 4720 scope.go:117] "RemoveContainer" containerID="04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.515843 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.523551 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk" (OuterVolumeSpecName: "kube-api-access-qd2tk") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "kube-api-access-qd2tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.524735 4720 generic.go:334] "Generic (PLEG): container finished" podID="c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" containerID="2fd1f6213d307140eac84aae651360f60c42a6f8497d24e80e7cad9b552dc318" exitCode=0 Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.524809 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" event={"ID":"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c","Type":"ContainerDied","Data":"2fd1f6213d307140eac84aae651360f60c42a6f8497d24e80e7cad9b552dc318"} Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.526469 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.529085 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts" (OuterVolumeSpecName: "scripts") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.533138 4720 generic.go:334] "Generic (PLEG): container finished" podID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerID="2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9" exitCode=0 Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.533321 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerDied","Data":"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9"} Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.533348 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7e0c5995-91ad-47a5-a367-9987cdcf9a02","Type":"ContainerDied","Data":"d7e15eb9607feb99968b12607181d837779688ceabe83c42b129e5ca90d76cbe"} Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.553722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.563811 4720 scope.go:117] "RemoveContainer" containerID="30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.615996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs\") pod \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616045 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle\") pod \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616131 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kqwr\" (UniqueName: \"kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr\") pod \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616164 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca\") pod \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") " Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616192 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data\") pod \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\" (UID: \"7e0c5995-91ad-47a5-a367-9987cdcf9a02\") 
" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616652 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616670 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd2tk\" (UniqueName: \"kubernetes.io/projected/0003d040-a30c-45fb-9521-41221cb33286-kube-api-access-qd2tk\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.616682 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.623764 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs" (OuterVolumeSpecName: "logs") pod "7e0c5995-91ad-47a5-a367-9987cdcf9a02" (UID: "7e0c5995-91ad-47a5-a367-9987cdcf9a02"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.630129 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.638128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr" (OuterVolumeSpecName: "kube-api-access-8kqwr") pod "7e0c5995-91ad-47a5-a367-9987cdcf9a02" (UID: "7e0c5995-91ad-47a5-a367-9987cdcf9a02"). InnerVolumeSpecName "kube-api-access-8kqwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.660055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e0c5995-91ad-47a5-a367-9987cdcf9a02" (UID: "7e0c5995-91ad-47a5-a367-9987cdcf9a02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.680805 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.685824 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7e0c5995-91ad-47a5-a367-9987cdcf9a02" (UID: "7e0c5995-91ad-47a5-a367-9987cdcf9a02"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.705320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data" (OuterVolumeSpecName: "config-data") pod "7e0c5995-91ad-47a5-a367-9987cdcf9a02" (UID: "7e0c5995-91ad-47a5-a367-9987cdcf9a02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719200 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719343 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719355 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719368 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719378 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0c5995-91ad-47a5-a367-9987cdcf9a02-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719409 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7e0c5995-91ad-47a5-a367-9987cdcf9a02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.719421 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kqwr\" (UniqueName: \"kubernetes.io/projected/7e0c5995-91ad-47a5-a367-9987cdcf9a02-kube-api-access-8kqwr\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.720016 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data" (OuterVolumeSpecName: "config-data") pod "0003d040-a30c-45fb-9521-41221cb33286" (UID: "0003d040-a30c-45fb-9521-41221cb33286"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.784101 4720 scope.go:117] "RemoveContainer" containerID="cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.820884 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0003d040-a30c-45fb-9521-41221cb33286-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.828730 4720 scope.go:117] "RemoveContainer" containerID="368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.879682 4720 scope.go:117] "RemoveContainer" containerID="04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.884591 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4\": container with ID starting with 04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4 not found: ID does not 
exist" containerID="04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.884652 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4"} err="failed to get container status \"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4\": rpc error: code = NotFound desc = could not find container \"04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4\": container with ID starting with 04b8fa42b313bf5e49c9b69c6696ae1d9c5b578518331b84a8db651dd4f61da4 not found: ID does not exist" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.884690 4720 scope.go:117] "RemoveContainer" containerID="30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.888975 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758\": container with ID starting with 30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758 not found: ID does not exist" containerID="30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.889026 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758"} err="failed to get container status \"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758\": rpc error: code = NotFound desc = could not find container \"30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758\": container with ID starting with 30a6a2d9258f4ba2958365dac1ae459e95cdf97ccde19b75d64ad72f9e5f8758 not found: ID does not exist" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.889064 4720 scope.go:117] 
"RemoveContainer" containerID="cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.893702 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223\": container with ID starting with cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223 not found: ID does not exist" containerID="cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.893763 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223"} err="failed to get container status \"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223\": rpc error: code = NotFound desc = could not find container \"cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223\": container with ID starting with cb237adffedb5327656866f0ace6618a000af67f14f783b93f23a6dbe243e223 not found: ID does not exist" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.893799 4720 scope.go:117] "RemoveContainer" containerID="368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.893956 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.894472 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8\": container with ID starting with 368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8 not found: ID does not exist" containerID="368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8" Jan 22 07:00:02 crc kubenswrapper[4720]: 
I0122 07:00:02.894511 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8"} err="failed to get container status \"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8\": rpc error: code = NotFound desc = could not find container \"368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8\": container with ID starting with 368ca16d7aa1e6e7fae904b08369a61b72fb4284a65f4e6e3513d57294fb6ab8 not found: ID does not exist" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.894531 4720 scope.go:117] "RemoveContainer" containerID="2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.914095 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.923826 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924214 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-kuttl-api-log" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924235 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-kuttl-api-log" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924250 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="proxy-httpd" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924256 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="proxy-httpd" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924278 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" containerName="watcher-applier" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924285 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" containerName="watcher-applier" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924296 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="sg-core" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924302 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="sg-core" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924346 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-notification-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924354 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-notification-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924363 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-api" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924371 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-api" Jan 22 07:00:02 crc kubenswrapper[4720]: E0122 07:00:02.924383 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-central-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924389 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-central-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924605 4720 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="proxy-httpd" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924625 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="sg-core" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924635 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-api" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924646 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f7586a-44ae-4c9f-9049-45c9dba9d7a9" containerName="watcher-applier" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924653 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" containerName="watcher-kuttl-api-log" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924665 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-central-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.924673 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="ceilometer-notification-agent" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.926243 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.928872 4720 scope.go:117] "RemoveContainer" containerID="ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.930561 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.930846 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.931039 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.941609 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:02 crc kubenswrapper[4720]: I0122 07:00:02.981806 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.003866 4720 scope.go:117] "RemoveContainer" containerID="2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9" Jan 22 07:00:03 crc kubenswrapper[4720]: E0122 07:00:03.004477 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9\": container with ID starting with 2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9 not found: ID does not exist" containerID="2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.004526 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9"} err="failed to get container status \"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9\": rpc error: code = NotFound desc = could not find container \"2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9\": container with ID starting with 2a5a24f3305406e9832472898b3596b41fcc38c62be65a2207c2387d31a9a5c9 not found: ID does not exist" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.004561 4720 scope.go:117] "RemoveContainer" containerID="ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23" Jan 22 07:00:03 crc kubenswrapper[4720]: E0122 07:00:03.005059 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23\": container with ID starting with ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23 not found: ID does not exist" containerID="ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23" Jan 22 07:00:03 crc 
kubenswrapper[4720]: I0122 07:00:03.005085 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23"} err="failed to get container status \"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23\": rpc error: code = NotFound desc = could not find container \"ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23\": container with ID starting with ac668e98b03a83e9ecc94200673728ba6de4c81f1d16750ec25415deb83e2b23 not found: ID does not exist" Jan 22 07:00:03 crc kubenswrapper[4720]: E0122 07:00:03.025815 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:00:03 crc kubenswrapper[4720]: E0122 07:00:03.026182 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data podName:7fee6a10-fdd0-4b20-aa01-88c426dc5d91 nodeName:}" failed. No retries permitted until 2026-01-22 07:00:07.026164229 +0000 UTC m=+1499.168070934 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts\") pod \"70a444b0-cf26-46ac-8caf-187a0bccd253\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127291 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch4bb\" (UniqueName: \"kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb\") pod \"70a444b0-cf26-46ac-8caf-187a0bccd253\" (UID: \"70a444b0-cf26-46ac-8caf-187a0bccd253\") " Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127531 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127572 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127624 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127661 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhvbc\" (UniqueName: \"kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127692 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127731 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.127765 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.128020 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70a444b0-cf26-46ac-8caf-187a0bccd253" (UID: "70a444b0-cf26-46ac-8caf-187a0bccd253"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.132177 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb" (OuterVolumeSpecName: "kube-api-access-ch4bb") pod "70a444b0-cf26-46ac-8caf-187a0bccd253" (UID: "70a444b0-cf26-46ac-8caf-187a0bccd253"). InnerVolumeSpecName "kube-api-access-ch4bb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.228955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229062 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhvbc\" (UniqueName: \"kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229132 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229149 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229164 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229197 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.229624 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch4bb\" (UniqueName: \"kubernetes.io/projected/70a444b0-cf26-46ac-8caf-187a0bccd253-kube-api-access-ch4bb\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.230042 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.230173 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc 
kubenswrapper[4720]: I0122 07:00:03.230331 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a444b0-cf26-46ac-8caf-187a0bccd253-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.233333 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.233826 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.234098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.234275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.236697 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts\") pod \"ceilometer-0\" (UID: 
\"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.248084 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhvbc\" (UniqueName: \"kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc\") pod \"ceilometer-0\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.280050 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.549554 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.550838 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq" event={"ID":"70a444b0-cf26-46ac-8caf-187a0bccd253","Type":"ContainerDied","Data":"f5f35e2c5c853f35bdc33961191be560bf8868899b77a3e80e26540686a087ab"} Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.550868 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherdb54-account-delete-dcbwq" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.550877 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f35e2c5c853f35bdc33961191be560bf8868899b77a3e80e26540686a087ab" Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.607802 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.615829 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:03 crc kubenswrapper[4720]: I0122 07:00:03.903421 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.061149 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.074694 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.146251 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume\") pod \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.146379 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6xs7\" (UniqueName: \"kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7\") pod \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.146529 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume\") pod \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\" (UID: \"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c\") " Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.147217 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" (UID: "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.150545 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7" (OuterVolumeSpecName: "kube-api-access-c6xs7") pod "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" (UID: "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c"). InnerVolumeSpecName "kube-api-access-c6xs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.151052 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" (UID: "c38ccafb-7319-4e13-a9e1-f38f73a8bd3c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.221595 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0003d040-a30c-45fb-9521-41221cb33286" path="/var/lib/kubelet/pods/0003d040-a30c-45fb-9521-41221cb33286/volumes" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.222652 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0c5995-91ad-47a5-a367-9987cdcf9a02" path="/var/lib/kubelet/pods/7e0c5995-91ad-47a5-a367-9987cdcf9a02/volumes" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.247901 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.247948 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6xs7\" (UniqueName: \"kubernetes.io/projected/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-kube-api-access-c6xs7\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.247959 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.352742 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vgmr2"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.360570 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-vgmr2"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.384478 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherdb54-account-delete-dcbwq"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.393131 4720 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["watcher-kuttl-default/watcher-db54-account-create-update-j4sp5"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.399175 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherdb54-account-delete-dcbwq"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.405872 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db54-account-create-update-j4sp5"] Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.562143 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerStarted","Data":"150686739b9f6c3616d703c5ee6f63df44b3d71da387d32d91ac6cf88938e43e"} Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.563882 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" event={"ID":"c38ccafb-7319-4e13-a9e1-f38f73a8bd3c","Type":"ContainerDied","Data":"74c08f473c5494d1915db78a13f2a4d0ecc0d9987cbfc8122055c1f1f61a82e6"} Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.563932 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c08f473c5494d1915db78a13f2a4d0ecc0d9987cbfc8122055c1f1f61a82e6" Jan 22 07:00:04 crc kubenswrapper[4720]: I0122 07:00:04.564000 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk" Jan 22 07:00:05 crc kubenswrapper[4720]: I0122 07:00:05.598433 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerStarted","Data":"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"} Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.248842 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a444b0-cf26-46ac-8caf-187a0bccd253" path="/var/lib/kubelet/pods/70a444b0-cf26-46ac-8caf-187a0bccd253/volumes" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.249786 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7166439-84ea-4607-a3a7-3dcd65e1001a" path="/var/lib/kubelet/pods/b7166439-84ea-4607-a3a7-3dcd65e1001a/volumes" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.250320 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45f6544-152d-4235-a6c6-d72625f9d66f" path="/var/lib/kubelet/pods/f45f6544-152d-4235-a6c6-d72625f9d66f/volumes" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.524019 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.614509 4720 generic.go:334] "Generic (PLEG): container finished" podID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" containerID="8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936" exitCode=0 Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.614598 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.614596 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7fee6a10-fdd0-4b20-aa01-88c426dc5d91","Type":"ContainerDied","Data":"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936"} Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.614709 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"7fee6a10-fdd0-4b20-aa01-88c426dc5d91","Type":"ContainerDied","Data":"ca2077164d5fc3b50abf715fa88ca667edc548c8a6010586eac5a5064b9db12d"} Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.614739 4720 scope.go:117] "RemoveContainer" containerID="8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.618197 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerStarted","Data":"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"} Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.618239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerStarted","Data":"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"} Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.635881 4720 scope.go:117] "RemoveContainer" containerID="8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936" Jan 22 07:00:06 crc kubenswrapper[4720]: E0122 07:00:06.636294 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936\": container with ID 
starting with 8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936 not found: ID does not exist" containerID="8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.636335 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936"} err="failed to get container status \"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936\": rpc error: code = NotFound desc = could not find container \"8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936\": container with ID starting with 8cb13f55dfcdcab8982685e5ef2f3a363ec4b5ea31ab198df21c97d5c2311936 not found: ID does not exist" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.666821 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca\") pod \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.666957 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data\") pod \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.667191 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle\") pod \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.667258 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs\") pod \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.667310 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4cmf\" (UniqueName: \"kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf\") pod \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\" (UID: \"7fee6a10-fdd0-4b20-aa01-88c426dc5d91\") " Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.667771 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs" (OuterVolumeSpecName: "logs") pod "7fee6a10-fdd0-4b20-aa01-88c426dc5d91" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.674055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf" (OuterVolumeSpecName: "kube-api-access-k4cmf") pod "7fee6a10-fdd0-4b20-aa01-88c426dc5d91" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91"). InnerVolumeSpecName "kube-api-access-k4cmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.690670 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7fee6a10-fdd0-4b20-aa01-88c426dc5d91" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.695467 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fee6a10-fdd0-4b20-aa01-88c426dc5d91" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.712069 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data" (OuterVolumeSpecName: "config-data") pod "7fee6a10-fdd0-4b20-aa01-88c426dc5d91" (UID: "7fee6a10-fdd0-4b20-aa01-88c426dc5d91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.815695 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.815733 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.815744 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4cmf\" (UniqueName: \"kubernetes.io/projected/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-kube-api-access-k4cmf\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.815756 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-custom-prometheus-ca\") on node \"crc\" 
DevicePath \"\"" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.815765 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fee6a10-fdd0-4b20-aa01-88c426dc5d91-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.946751 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:06 crc kubenswrapper[4720]: I0122 07:00:06.952556 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.678795 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-2dww7"] Jan 22 07:00:07 crc kubenswrapper[4720]: E0122 07:00:07.679204 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a444b0-cf26-46ac-8caf-187a0bccd253" containerName="mariadb-account-delete" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679217 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a444b0-cf26-46ac-8caf-187a0bccd253" containerName="mariadb-account-delete" Jan 22 07:00:07 crc kubenswrapper[4720]: E0122 07:00:07.679246 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" containerName="collect-profiles" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679253 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" containerName="collect-profiles" Jan 22 07:00:07 crc kubenswrapper[4720]: E0122 07:00:07.679271 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" containerName="watcher-decision-engine" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679279 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" 
containerName="watcher-decision-engine" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679424 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" containerName="collect-profiles" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679441 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a444b0-cf26-46ac-8caf-187a0bccd253" containerName="mariadb-account-delete" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.679452 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" containerName="watcher-decision-engine" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.680080 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.694557 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2dww7"] Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.730367 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"] Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.733707 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.739272 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.772516 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"] Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.842527 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.842778 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.842851 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86br4\" (UniqueName: \"kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.842877 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc79k\" (UniqueName: 
\"kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.943898 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86br4\" (UniqueName: \"kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.943988 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc79k\" (UniqueName: \"kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.944066 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.944093 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.945136 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.945211 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.980602 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc79k\" (UniqueName: \"kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k\") pod \"watcher-5b9d-account-create-update-hh4vt\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") " pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.984537 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86br4\" (UniqueName: \"kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4\") pod \"watcher-db-create-2dww7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") " pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:07 crc kubenswrapper[4720]: I0122 07:00:07.997884 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.069228 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.283577 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fee6a10-fdd0-4b20-aa01-88c426dc5d91" path="/var/lib/kubelet/pods/7fee6a10-fdd0-4b20-aa01-88c426dc5d91/volumes" Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.600604 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2dww7"] Jan 22 07:00:08 crc kubenswrapper[4720]: W0122 07:00:08.617267 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d5a9f1b_a220_45c7_8902_634631838ea7.slice/crio-2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921 WatchSource:0}: Error finding container 2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921: Status 404 returned error can't find the container with id 2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921 Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.662416 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2dww7" event={"ID":"8d5a9f1b-a220-45c7-8902-634631838ea7","Type":"ContainerStarted","Data":"2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921"} Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671440 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerStarted","Data":"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"} Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671643 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-central-agent" 
containerID="cri-o://ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9" gracePeriod=30 Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671720 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671749 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="sg-core" containerID="cri-o://55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c" gracePeriod=30 Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671737 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="proxy-httpd" containerID="cri-o://9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568" gracePeriod=30 Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.671791 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-notification-agent" containerID="cri-o://55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a" gracePeriod=30 Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.719706 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.860472343 podStartE2EDuration="6.71968484s" podCreationTimestamp="2026-01-22 07:00:02 +0000 UTC" firstStartedPulling="2026-01-22 07:00:03.911844332 +0000 UTC m=+1496.053751037" lastFinishedPulling="2026-01-22 07:00:07.771056839 +0000 UTC m=+1499.912963534" observedRunningTime="2026-01-22 07:00:08.712428993 +0000 UTC m=+1500.854335698" watchObservedRunningTime="2026-01-22 07:00:08.71968484 +0000 UTC m=+1500.861591545" Jan 22 07:00:08 crc 
kubenswrapper[4720]: I0122 07:00:08.854240 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"]
Jan 22 07:00:08 crc kubenswrapper[4720]: W0122 07:00:08.858159 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51c71a30_7002_460d_aaab_0a7bc54247fa.slice/crio-bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa WatchSource:0}: Error finding container bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa: Status 404 returned error can't find the container with id bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa
Jan 22 07:00:08 crc kubenswrapper[4720]: I0122 07:00:08.863234 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.696704 4720 generic.go:334] "Generic (PLEG): container finished" podID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerID="9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568" exitCode=0
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.697081 4720 generic.go:334] "Generic (PLEG): container finished" podID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerID="55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c" exitCode=2
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.697098 4720 generic.go:334] "Generic (PLEG): container finished" podID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerID="55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a" exitCode=0
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.696867 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerDied","Data":"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.697190 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerDied","Data":"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.697215 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerDied","Data":"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.699621 4720 generic.go:334] "Generic (PLEG): container finished" podID="8d5a9f1b-a220-45c7-8902-634631838ea7" containerID="5517b719f8dab98ca6d28fc71100fee4f6d967b1b70e682f90937a78c5fcbec1" exitCode=0
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.699773 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-2dww7" event={"ID":"8d5a9f1b-a220-45c7-8902-634631838ea7","Type":"ContainerDied","Data":"5517b719f8dab98ca6d28fc71100fee4f6d967b1b70e682f90937a78c5fcbec1"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.702955 4720 generic.go:334] "Generic (PLEG): container finished" podID="51c71a30-7002-460d-aaab-0a7bc54247fa" containerID="1e21d13d07afa88bb5c09e77ef705a17a9c36bd81635ace49ed4dcb4c4ca31c0" exitCode=0
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.703009 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" event={"ID":"51c71a30-7002-460d-aaab-0a7bc54247fa","Type":"ContainerDied","Data":"1e21d13d07afa88bb5c09e77ef705a17a9c36bd81635ace49ed4dcb4c4ca31c0"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.703040 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" event={"ID":"51c71a30-7002-460d-aaab-0a7bc54247fa","Type":"ContainerStarted","Data":"bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa"}
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.936147 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"]
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.940950 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.954888 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"]
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.996204 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.996249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:09 crc kubenswrapper[4720]: I0122 07:00:09.996301 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f459b\" (UniqueName: \"kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.097631 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f459b\" (UniqueName: \"kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.097750 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.097774 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.098319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.098337 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.117694 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f459b\" (UniqueName: \"kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b\") pod \"redhat-marketplace-xsbk9\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.352848 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsbk9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.498385 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510212 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510285 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510393 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510461 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510681 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhvbc\" (UniqueName: \"kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.510713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.511182 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts\") pod \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\" (UID: \"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02\") "
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.513665 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.514009 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.540624 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc" (OuterVolumeSpecName: "kube-api-access-lhvbc") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "kube-api-access-lhvbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.540985 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts" (OuterVolumeSpecName: "scripts") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.578639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.594576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615603 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615640 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615653 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615666 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhvbc\" (UniqueName: \"kubernetes.io/projected/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-kube-api-access-lhvbc\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615684 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.615699 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.670036 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.710856 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data" (OuterVolumeSpecName: "config-data") pod "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" (UID: "8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.717175 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.717218 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.723806 4720 generic.go:334] "Generic (PLEG): container finished" podID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerID="ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9" exitCode=0
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.723879 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.723949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerDied","Data":"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"}
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.723987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02","Type":"ContainerDied","Data":"150686739b9f6c3616d703c5ee6f63df44b3d71da387d32d91ac6cf88938e43e"}
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.724007 4720 scope.go:117] "RemoveContainer" containerID="9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.809823 4720 scope.go:117] "RemoveContainer" containerID="55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.827981 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.854984 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874147 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874605 4720 scope.go:117] "RemoveContainer" containerID="55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.874661 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-notification-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874677 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-notification-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.874690 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="proxy-httpd"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874697 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="proxy-httpd"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.874709 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="sg-core"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874718 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="sg-core"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.874733 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-central-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874740 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-central-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.874991 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="proxy-httpd"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.875007 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-central-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.875016 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="sg-core"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.875030 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" containerName="ceilometer-notification-agent"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.876606 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.880486 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.880642 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.880739 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.896666 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922435 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922519 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgg65\" (UniqueName: \"kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922562 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922646 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.922739 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.933861 4720 scope.go:117] "RemoveContainer" containerID="ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"
Jan 22 07:00:10 crc kubenswrapper[4720]: W0122 07:00:10.947077 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cef4b87_a58e_4efe_a1ac_16cc86a676b1.slice/crio-e5c0d496141c824ca7420b84ca9c47f16358c9955d044116b7099cfea21d1174 WatchSource:0}: Error finding container e5c0d496141c824ca7420b84ca9c47f16358c9955d044116b7099cfea21d1174: Status 404 returned error can't find the container with id e5c0d496141c824ca7420b84ca9c47f16358c9955d044116b7099cfea21d1174
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.947079 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"]
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.992415 4720 scope.go:117] "RemoveContainer" containerID="9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.993721 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568\": container with ID starting with 9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568 not found: ID does not exist" containerID="9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.993798 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568"} err="failed to get container status \"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568\": rpc error: code = NotFound desc = could not find container \"9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568\": container with ID starting with 9617d4432b3c038c31f66629a8efc6f1a8fdf3664fc2e6a82a7206ea69b3b568 not found: ID does not exist"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.993840 4720 scope.go:117] "RemoveContainer" containerID="55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.994313 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c\": container with ID starting with 55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c not found: ID does not exist" containerID="55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.994342 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c"} err="failed to get container status \"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c\": rpc error: code = NotFound desc = could not find container \"55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c\": container with ID starting with 55c27ac6a41e0fd44cdf13f4137ad08f5f42d67a09efd34b5ba27c208288494c not found: ID does not exist"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.994361 4720 scope.go:117] "RemoveContainer" containerID="55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.994582 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a\": container with ID starting with 55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a not found: ID does not exist" containerID="55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.994618 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a"} err="failed to get container status \"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a\": rpc error: code = NotFound desc = could not find container \"55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a\": container with ID starting with 55e90ecd3ac4e349ec8db53b9590cd2860a9088888fdcac68f10f773709d8d8a not found: ID does not exist"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.994643 4720 scope.go:117] "RemoveContainer" containerID="ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"
Jan 22 07:00:10 crc kubenswrapper[4720]: E0122 07:00:10.994863 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9\": container with ID starting with ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9 not found: ID does not exist" containerID="ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"
Jan 22 07:00:10 crc kubenswrapper[4720]: I0122 07:00:10.994940 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9"} err="failed to get container status \"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9\": rpc error: code = NotFound desc = could not find container \"ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9\": container with ID starting with ace28fced767eb6f4679c38ef14f854ef329c077632be63999e1572cc43ef3f9 not found: ID does not exist"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.024956 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.025192 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.026019 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.026251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.026868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgg65\" (UniqueName: \"kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.027107 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.027262 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.027362 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.027496 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.027567 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.031889 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.034680 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.044960 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.045739 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.045863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.050555 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgg65\" (UniqueName: \"kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65\") pod \"ceilometer-0\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.197507 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.318393 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2dww7"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.326369 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.441102 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts\") pod \"51c71a30-7002-460d-aaab-0a7bc54247fa\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") "
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.441278 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86br4\" (UniqueName: \"kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4\") pod \"8d5a9f1b-a220-45c7-8902-634631838ea7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") "
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.441309 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts\") pod \"8d5a9f1b-a220-45c7-8902-634631838ea7\" (UID: \"8d5a9f1b-a220-45c7-8902-634631838ea7\") "
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.441486 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc79k\" (UniqueName: \"kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k\") pod \"51c71a30-7002-460d-aaab-0a7bc54247fa\" (UID: \"51c71a30-7002-460d-aaab-0a7bc54247fa\") "
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.445505 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51c71a30-7002-460d-aaab-0a7bc54247fa" (UID: "51c71a30-7002-460d-aaab-0a7bc54247fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.446658 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d5a9f1b-a220-45c7-8902-634631838ea7" (UID: "8d5a9f1b-a220-45c7-8902-634631838ea7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.446844 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k" (OuterVolumeSpecName: "kube-api-access-rc79k") pod "51c71a30-7002-460d-aaab-0a7bc54247fa" (UID: "51c71a30-7002-460d-aaab-0a7bc54247fa"). InnerVolumeSpecName "kube-api-access-rc79k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.446955 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4" (OuterVolumeSpecName: "kube-api-access-86br4") pod "8d5a9f1b-a220-45c7-8902-634631838ea7" (UID: "8d5a9f1b-a220-45c7-8902-634631838ea7"). InnerVolumeSpecName "kube-api-access-86br4".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.552733 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86br4\" (UniqueName: \"kubernetes.io/projected/8d5a9f1b-a220-45c7-8902-634631838ea7-kube-api-access-86br4\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.552785 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d5a9f1b-a220-45c7-8902-634631838ea7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.552805 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc79k\" (UniqueName: \"kubernetes.io/projected/51c71a30-7002-460d-aaab-0a7bc54247fa-kube-api-access-rc79k\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.552827 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51c71a30-7002-460d-aaab-0a7bc54247fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.687681 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:11 crc kubenswrapper[4720]: W0122 07:00:11.710372 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode48ac237_80c1_4cca_9e7d_8610d3467cd7.slice/crio-f2cee8ad16ec9e3fae675e79833d9c55f19ebb3881dfcb16b40cee765725e338 WatchSource:0}: Error finding container f2cee8ad16ec9e3fae675e79833d9c55f19ebb3881dfcb16b40cee765725e338: Status 404 returned error can't find the container with id f2cee8ad16ec9e3fae675e79833d9c55f19ebb3881dfcb16b40cee765725e338 Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.737541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-db-create-2dww7" event={"ID":"8d5a9f1b-a220-45c7-8902-634631838ea7","Type":"ContainerDied","Data":"2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921"} Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.737579 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-2dww7" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.737598 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bceb768b2b0328ab15a445059f14dc5167151e2ddffa0596825a0059c46c921" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.740623 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerStarted","Data":"f2cee8ad16ec9e3fae675e79833d9c55f19ebb3881dfcb16b40cee765725e338"} Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.744573 4720 generic.go:334] "Generic (PLEG): container finished" podID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerID="e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5" exitCode=0 Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.744685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerDied","Data":"e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5"} Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.744722 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerStarted","Data":"e5c0d496141c824ca7420b84ca9c47f16358c9955d044116b7099cfea21d1174"} Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.754797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" event={"ID":"51c71a30-7002-460d-aaab-0a7bc54247fa","Type":"ContainerDied","Data":"bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa"} Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.754822 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt" Jan 22 07:00:11 crc kubenswrapper[4720]: I0122 07:00:11.754832 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb2e48273de9131b9b75d72280b2b1980f265dfc3355f42b1f671e5062095ffa" Jan 22 07:00:12 crc kubenswrapper[4720]: I0122 07:00:12.222571 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02" path="/var/lib/kubelet/pods/8d89b304-1eb9-4cc6-8fbf-8c859e7c4a02/volumes" Jan 22 07:00:12 crc kubenswrapper[4720]: I0122 07:00:12.764265 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerStarted","Data":"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12"} Jan 22 07:00:12 crc kubenswrapper[4720]: I0122 07:00:12.766958 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerStarted","Data":"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea"} Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.127007 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k9fts"] Jan 22 07:00:13 crc kubenswrapper[4720]: E0122 07:00:13.127444 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c71a30-7002-460d-aaab-0a7bc54247fa" containerName="mariadb-account-create-update" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.127470 4720 
state_mem.go:107] "Deleted CPUSet assignment" podUID="51c71a30-7002-460d-aaab-0a7bc54247fa" containerName="mariadb-account-create-update" Jan 22 07:00:13 crc kubenswrapper[4720]: E0122 07:00:13.127509 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5a9f1b-a220-45c7-8902-634631838ea7" containerName="mariadb-database-create" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.127519 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5a9f1b-a220-45c7-8902-634631838ea7" containerName="mariadb-database-create" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.127706 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c71a30-7002-460d-aaab-0a7bc54247fa" containerName="mariadb-account-create-update" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.127725 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5a9f1b-a220-45c7-8902-634631838ea7" containerName="mariadb-database-create" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.128328 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.131129 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.132584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2rphh" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.153107 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k9fts"] Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.193848 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.193890 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.194028 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.194071 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lnt7\" (UniqueName: \"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.295542 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.295633 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lnt7\" (UniqueName: \"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.295666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.295688 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 
07:00:13.301203 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.303400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.304457 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.316532 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lnt7\" (UniqueName: \"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7\") pod \"watcher-kuttl-db-sync-k9fts\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.443285 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.777250 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerStarted","Data":"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c"} Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.780356 4720 generic.go:334] "Generic (PLEG): container finished" podID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerID="74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea" exitCode=0 Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.780401 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerDied","Data":"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea"} Jan 22 07:00:13 crc kubenswrapper[4720]: I0122 07:00:13.939557 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k9fts"] Jan 22 07:00:13 crc kubenswrapper[4720]: W0122 07:00:13.941315 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93d003a3_f42f_4a33_8960_be8f156d121c.slice/crio-9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a WatchSource:0}: Error finding container 9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a: Status 404 returned error can't find the container with id 9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.791735 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" 
event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerStarted","Data":"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc"} Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.793435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" event={"ID":"93d003a3-f42f-4a33-8960-be8f156d121c","Type":"ContainerStarted","Data":"e7d08a4a55ec335da475031256cc67252a201eb76c20099836a15177c390783f"} Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.793491 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" event={"ID":"93d003a3-f42f-4a33-8960-be8f156d121c","Type":"ContainerStarted","Data":"9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a"} Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.795631 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerStarted","Data":"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff"} Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.819708 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xsbk9" podStartSLOduration=3.277035851 podStartE2EDuration="5.81968403s" podCreationTimestamp="2026-01-22 07:00:09 +0000 UTC" firstStartedPulling="2026-01-22 07:00:11.754110126 +0000 UTC m=+1503.896016831" lastFinishedPulling="2026-01-22 07:00:14.296758305 +0000 UTC m=+1506.438665010" observedRunningTime="2026-01-22 07:00:14.813122133 +0000 UTC m=+1506.955028848" watchObservedRunningTime="2026-01-22 07:00:14.81968403 +0000 UTC m=+1506.961590735" Jan 22 07:00:14 crc kubenswrapper[4720]: I0122 07:00:14.839358 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" podStartSLOduration=1.839328642 
podStartE2EDuration="1.839328642s" podCreationTimestamp="2026-01-22 07:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:14.83611162 +0000 UTC m=+1506.978018345" watchObservedRunningTime="2026-01-22 07:00:14.839328642 +0000 UTC m=+1506.981235347" Jan 22 07:00:16 crc kubenswrapper[4720]: I0122 07:00:16.829901 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerStarted","Data":"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe"} Jan 22 07:00:16 crc kubenswrapper[4720]: I0122 07:00:16.830408 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:16 crc kubenswrapper[4720]: I0122 07:00:16.855397 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.933112671 podStartE2EDuration="6.85537447s" podCreationTimestamp="2026-01-22 07:00:10 +0000 UTC" firstStartedPulling="2026-01-22 07:00:11.712554588 +0000 UTC m=+1503.854461293" lastFinishedPulling="2026-01-22 07:00:15.634816387 +0000 UTC m=+1507.776723092" observedRunningTime="2026-01-22 07:00:16.851969963 +0000 UTC m=+1508.993876668" watchObservedRunningTime="2026-01-22 07:00:16.85537447 +0000 UTC m=+1508.997281175" Jan 22 07:00:17 crc kubenswrapper[4720]: I0122 07:00:17.838987 4720 generic.go:334] "Generic (PLEG): container finished" podID="93d003a3-f42f-4a33-8960-be8f156d121c" containerID="e7d08a4a55ec335da475031256cc67252a201eb76c20099836a15177c390783f" exitCode=0 Jan 22 07:00:17 crc kubenswrapper[4720]: I0122 07:00:17.839083 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" 
event={"ID":"93d003a3-f42f-4a33-8960-be8f156d121c","Type":"ContainerDied","Data":"e7d08a4a55ec335da475031256cc67252a201eb76c20099836a15177c390783f"} Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.246757 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.407287 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lnt7\" (UniqueName: \"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7\") pod \"93d003a3-f42f-4a33-8960-be8f156d121c\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.407345 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle\") pod \"93d003a3-f42f-4a33-8960-be8f156d121c\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.407407 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data\") pod \"93d003a3-f42f-4a33-8960-be8f156d121c\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.407475 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data\") pod \"93d003a3-f42f-4a33-8960-be8f156d121c\" (UID: \"93d003a3-f42f-4a33-8960-be8f156d121c\") " Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.412925 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7" (OuterVolumeSpecName: "kube-api-access-6lnt7") pod "93d003a3-f42f-4a33-8960-be8f156d121c" (UID: "93d003a3-f42f-4a33-8960-be8f156d121c"). InnerVolumeSpecName "kube-api-access-6lnt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.414977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "93d003a3-f42f-4a33-8960-be8f156d121c" (UID: "93d003a3-f42f-4a33-8960-be8f156d121c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.433810 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93d003a3-f42f-4a33-8960-be8f156d121c" (UID: "93d003a3-f42f-4a33-8960-be8f156d121c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.452764 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data" (OuterVolumeSpecName: "config-data") pod "93d003a3-f42f-4a33-8960-be8f156d121c" (UID: "93d003a3-f42f-4a33-8960-be8f156d121c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.509556 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lnt7\" (UniqueName: \"kubernetes.io/projected/93d003a3-f42f-4a33-8960-be8f156d121c-kube-api-access-6lnt7\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.509835 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.509943 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.510029 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93d003a3-f42f-4a33-8960-be8f156d121c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.858482 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" event={"ID":"93d003a3-f42f-4a33-8960-be8f156d121c","Type":"ContainerDied","Data":"9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a"} Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.858528 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9236568d9294c2f833fb38e1591a2e492c79971e629e9f30ab7884ddda82f76a" Jan 22 07:00:19 crc kubenswrapper[4720]: I0122 07:00:19.858536 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k9fts" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.130011 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: E0122 07:00:20.130799 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93d003a3-f42f-4a33-8960-be8f156d121c" containerName="watcher-kuttl-db-sync" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.130820 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="93d003a3-f42f-4a33-8960-be8f156d121c" containerName="watcher-kuttl-db-sync" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.131027 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="93d003a3-f42f-4a33-8960-be8f156d121c" containerName="watcher-kuttl-db-sync" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.132043 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.143123 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.144063 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.144110 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-2rphh" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.149361 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.154490 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: 
I0122 07:00:20.163224 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.164338 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.166252 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.199156 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223257 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223352 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223413 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223463 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223485 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hlsj\" (UniqueName: \"kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.223517 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.276495 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.278056 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.281387 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.290552 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325651 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325710 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8lz\" (UniqueName: \"kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325778 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325799 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: 
\"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325818 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hlsj\" (UniqueName: \"kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325852 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325926 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325967 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.325984 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.326000 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.326036 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.327280 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.332490 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.334517 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.335528 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.336106 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.347277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.354097 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.378742 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.382179 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hlsj\" (UniqueName: \"kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj\") pod \"watcher-kuttl-api-0\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427466 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427519 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xck6h\" (UniqueName: \"kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: 
I0122 07:00:20.427630 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427712 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8lz\" (UniqueName: \"kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427783 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427828 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.427895 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 
07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.428775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.432331 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.434627 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.435549 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.438835 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.453774 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.454080 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8lz\" (UniqueName: \"kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.496989 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.529868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.529981 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.530017 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.530040 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xck6h\" (UniqueName: \"kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.531856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.537159 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.538777 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.548241 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xck6h\" (UniqueName: \"kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h\") pod \"watcher-kuttl-applier-0\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.607453 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.934041 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:20 crc kubenswrapper[4720]: I0122 07:00:20.957360 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.074633 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:21 crc kubenswrapper[4720]: W0122 07:00:21.079519 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbed144f1_70f3_49fa_a09c_f10b5b03c2c2.slice/crio-66c32b876fffea5e333ea1a61417dfa58d24232db81b70622ecc6f11c6f0279e WatchSource:0}: Error finding container 66c32b876fffea5e333ea1a61417dfa58d24232db81b70622ecc6f11c6f0279e: Status 404 returned error can't find the container with id 66c32b876fffea5e333ea1a61417dfa58d24232db81b70622ecc6f11c6f0279e Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.202514 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.890191 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerStarted","Data":"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.890528 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerStarted","Data":"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956"} Jan 22 07:00:21 crc 
kubenswrapper[4720]: I0122 07:00:21.890706 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.890745 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerStarted","Data":"31b1fe3a17d412bc2a158228b58d352590d651f89bb32df266078ef1c105bb3f"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.892968 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bed144f1-70f3-49fa-a09c-f10b5b03c2c2","Type":"ContainerStarted","Data":"3ba5eb6be0c78618c80b5fa97f5d88da65fd9d70e021735cc3ca8480aadcd0a4"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.893026 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bed144f1-70f3-49fa-a09c-f10b5b03c2c2","Type":"ContainerStarted","Data":"66c32b876fffea5e333ea1a61417dfa58d24232db81b70622ecc6f11c6f0279e"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.895220 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"035a1409-387d-4f2a-a89e-b36a2036e29b","Type":"ContainerStarted","Data":"6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.895267 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"035a1409-387d-4f2a-a89e-b36a2036e29b","Type":"ContainerStarted","Data":"ea58adb7236bd130551297576a1decef80e9b28491e169690f84963c7add1c7b"} Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.922502 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
podStartSLOduration=1.9224769400000001 podStartE2EDuration="1.92247694s" podCreationTimestamp="2026-01-22 07:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:21.916027175 +0000 UTC m=+1514.057933900" watchObservedRunningTime="2026-01-22 07:00:21.92247694 +0000 UTC m=+1514.064383645" Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.945008 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.944972433 podStartE2EDuration="1.944972433s" podCreationTimestamp="2026-01-22 07:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:21.94034563 +0000 UTC m=+1514.082252335" watchObservedRunningTime="2026-01-22 07:00:21.944972433 +0000 UTC m=+1514.086879128" Jan 22 07:00:21 crc kubenswrapper[4720]: I0122 07:00:21.970787 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.9707660200000001 podStartE2EDuration="1.97076602s" podCreationTimestamp="2026-01-22 07:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:21.968379902 +0000 UTC m=+1514.110286607" watchObservedRunningTime="2026-01-22 07:00:21.97076602 +0000 UTC m=+1514.112672725" Jan 22 07:00:23 crc kubenswrapper[4720]: I0122 07:00:23.928237 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"] Jan 22 07:00:23 crc kubenswrapper[4720]: I0122 07:00:23.929930 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xsbk9" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" 
containerName="registry-server" containerID="cri-o://f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc" gracePeriod=2 Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.446471 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.531279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f459b\" (UniqueName: \"kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b\") pod \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.531465 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities\") pod \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.531501 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content\") pod \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\" (UID: \"2cef4b87-a58e-4efe-a1ac-16cc86a676b1\") " Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.532537 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities" (OuterVolumeSpecName: "utilities") pod "2cef4b87-a58e-4efe-a1ac-16cc86a676b1" (UID: "2cef4b87-a58e-4efe-a1ac-16cc86a676b1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.538048 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b" (OuterVolumeSpecName: "kube-api-access-f459b") pod "2cef4b87-a58e-4efe-a1ac-16cc86a676b1" (UID: "2cef4b87-a58e-4efe-a1ac-16cc86a676b1"). InnerVolumeSpecName "kube-api-access-f459b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.558305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cef4b87-a58e-4efe-a1ac-16cc86a676b1" (UID: "2cef4b87-a58e-4efe-a1ac-16cc86a676b1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.633556 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f459b\" (UniqueName: \"kubernetes.io/projected/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-kube-api-access-f459b\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.633638 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.633809 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cef4b87-a58e-4efe-a1ac-16cc86a676b1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.931336 4720 generic.go:334] "Generic (PLEG): container finished" podID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" 
containerID="f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc" exitCode=0 Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.931556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerDied","Data":"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc"} Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.931692 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xsbk9" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.931718 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xsbk9" event={"ID":"2cef4b87-a58e-4efe-a1ac-16cc86a676b1","Type":"ContainerDied","Data":"e5c0d496141c824ca7420b84ca9c47f16358c9955d044116b7099cfea21d1174"} Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.931749 4720 scope.go:117] "RemoveContainer" containerID="f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.954621 4720 scope.go:117] "RemoveContainer" containerID="74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.980207 4720 scope.go:117] "RemoveContainer" containerID="e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5" Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.983631 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"] Jan 22 07:00:24 crc kubenswrapper[4720]: I0122 07:00:24.993566 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xsbk9"] Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.003662 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 
22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.023197 4720 scope.go:117] "RemoveContainer" containerID="f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc" Jan 22 07:00:25 crc kubenswrapper[4720]: E0122 07:00:25.024566 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc\": container with ID starting with f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc not found: ID does not exist" containerID="f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.024626 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc"} err="failed to get container status \"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc\": rpc error: code = NotFound desc = could not find container \"f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc\": container with ID starting with f34130507fc0f8de34174d97d010e2b4a8efda78a424dc4814e80fc9d37afdcc not found: ID does not exist" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.024659 4720 scope.go:117] "RemoveContainer" containerID="74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea" Jan 22 07:00:25 crc kubenswrapper[4720]: E0122 07:00:25.025144 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea\": container with ID starting with 74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea not found: ID does not exist" containerID="74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.025253 4720 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea"} err="failed to get container status \"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea\": rpc error: code = NotFound desc = could not find container \"74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea\": container with ID starting with 74d7f8ba9e401c59ca210448c1e0b390a84f87abbd883d27f33e608e667c83ea not found: ID does not exist" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.025341 4720 scope.go:117] "RemoveContainer" containerID="e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5" Jan 22 07:00:25 crc kubenswrapper[4720]: E0122 07:00:25.025730 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5\": container with ID starting with e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5 not found: ID does not exist" containerID="e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.025817 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5"} err="failed to get container status \"e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5\": rpc error: code = NotFound desc = could not find container \"e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5\": container with ID starting with e2b1f9076c311768aee796f92a7d26733bc64f44da4e9760fdfee6a1dad85df5 not found: ID does not exist" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.455292 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:25 crc kubenswrapper[4720]: I0122 07:00:25.608775 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:26 crc kubenswrapper[4720]: I0122 07:00:26.223286 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" path="/var/lib/kubelet/pods/2cef4b87-a58e-4efe-a1ac-16cc86a676b1/volumes" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.454978 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.467718 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.498645 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.528533 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.609154 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.639215 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.985145 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:30 crc kubenswrapper[4720]: I0122 07:00:30.994402 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:31 crc kubenswrapper[4720]: I0122 07:00:31.015804 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:31 crc kubenswrapper[4720]: I0122 07:00:31.025389 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:32 crc kubenswrapper[4720]: I0122 07:00:32.216140 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0003d040-a30c-45fb-9521-41221cb33286" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.138:3000/\": dial tcp 10.217.0.138:3000: i/o timeout" Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.565970 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.566397 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-central-agent" containerID="cri-o://0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12" gracePeriod=30 Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.566503 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="proxy-httpd" containerID="cri-o://4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe" gracePeriod=30 Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.566547 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-notification-agent" containerID="cri-o://aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c" gracePeriod=30 Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.566489 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="sg-core" containerID="cri-o://65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff" gracePeriod=30 Jan 22 07:00:33 crc kubenswrapper[4720]: I0122 07:00:33.578962 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.045902 4720 generic.go:334] "Generic (PLEG): container finished" podID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerID="4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe" exitCode=0 Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.046309 4720 generic.go:334] "Generic (PLEG): container finished" podID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerID="65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff" exitCode=2 Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.046325 4720 generic.go:334] "Generic (PLEG): container finished" podID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerID="0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12" exitCode=0 Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.045967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerDied","Data":"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe"} Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.046390 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerDied","Data":"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff"} Jan 22 07:00:34 crc kubenswrapper[4720]: I0122 07:00:34.046409 4720 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerDied","Data":"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12"} Jan 22 07:00:35 crc kubenswrapper[4720]: I0122 07:00:35.672901 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:35 crc kubenswrapper[4720]: I0122 07:00:35.673552 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-api" containerID="cri-o://1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40" gracePeriod=30 Jan 22 07:00:35 crc kubenswrapper[4720]: I0122 07:00:35.674821 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-kuttl-api-log" containerID="cri-o://b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956" gracePeriod=30 Jan 22 07:00:36 crc kubenswrapper[4720]: I0122 07:00:36.065256 4720 generic.go:334] "Generic (PLEG): container finished" podID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerID="b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956" exitCode=143 Jan 22 07:00:36 crc kubenswrapper[4720]: I0122 07:00:36.065347 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerDied","Data":"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956"} Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.063077 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.075866 4720 generic.go:334] "Generic (PLEG): container finished" podID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerID="1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40" exitCode=0 Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.075942 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerDied","Data":"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40"} Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.075985 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3","Type":"ContainerDied","Data":"31b1fe3a17d412bc2a158228b58d352590d651f89bb32df266078ef1c105bb3f"} Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.076009 4720 scope.go:117] "RemoveContainer" containerID="1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.076165 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.107475 4720 scope.go:117] "RemoveContainer" containerID="b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.128781 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.128875 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.128990 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hlsj\" (UniqueName: \"kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.129234 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.129449 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: 
\"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.129518 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.129561 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca\") pod \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\" (UID: \"f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.129973 4720 scope.go:117] "RemoveContainer" containerID="1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.130985 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40\": container with ID starting with 1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40 not found: ID does not exist" containerID="1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.131042 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs" (OuterVolumeSpecName: "logs") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.131046 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40"} err="failed to get container status \"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40\": rpc error: code = NotFound desc = could not find container \"1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40\": container with ID starting with 1db3d09bad7855318971cbe1d6bd273e4743d05d5883734959276b641d41bd40 not found: ID does not exist" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.131087 4720 scope.go:117] "RemoveContainer" containerID="b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.135183 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956\": container with ID starting with b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956 not found: ID does not exist" containerID="b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.135256 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956"} err="failed to get container status \"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956\": rpc error: code = NotFound desc = could not find container \"b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956\": container with ID starting with b4629172d61515df83acd3714a677421224badc9b4f3dbea53218d0dee09a956 not found: ID does not exist" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.151326 4720 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj" (OuterVolumeSpecName: "kube-api-access-2hlsj") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "kube-api-access-2hlsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.187159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.187176 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.220050 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.233165 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.233228 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.233239 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.233248 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.233258 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hlsj\" (UniqueName: \"kubernetes.io/projected/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-kube-api-access-2hlsj\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.239399 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data" (OuterVolumeSpecName: "config-data") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.244635 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" (UID: "f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.338694 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.339111 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.422589 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.434562 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.455306 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.455876 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="registry-server" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.455900 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="registry-server" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.455928 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-api" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.455939 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-api" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.455956 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="extract-content" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.455963 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="extract-content" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.455975 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-kuttl-api-log" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.455983 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-kuttl-api-log" Jan 22 07:00:37 crc kubenswrapper[4720]: E0122 07:00:37.455998 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="extract-utilities" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.456004 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="extract-utilities" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.456181 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cef4b87-a58e-4efe-a1ac-16cc86a676b1" containerName="registry-server" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.456203 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-kuttl-api-log" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 
07:00:37.456216 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" containerName="watcher-api" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.457571 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.460607 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.461048 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.461299 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.468774 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650426 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650477 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650526 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650558 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqm8j\" (UniqueName: \"kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650648 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.650672 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.720552 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752225 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752279 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752324 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752364 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqm8j\" (UniqueName: \"kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752435 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.752453 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.753012 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.757708 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.764799 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.765260 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.767813 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.768230 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.772887 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqm8j\" (UniqueName: \"kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j\") pod \"watcher-kuttl-api-0\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.783120 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856116 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856197 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856339 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856406 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856446 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856478 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856561 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856653 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgg65\" (UniqueName: \"kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65\") pod \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\" (UID: \"e48ac237-80c1-4cca-9e7d-8610d3467cd7\") " Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.856810 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.857167 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.857227 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e48ac237-80c1-4cca-9e7d-8610d3467cd7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.861231 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts" (OuterVolumeSpecName: "scripts") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.867472 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65" (OuterVolumeSpecName: "kube-api-access-mgg65") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "kube-api-access-mgg65". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.892104 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.928235 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.956987 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.960035 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.960090 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.960111 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.960125 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgg65\" (UniqueName: 
\"kubernetes.io/projected/e48ac237-80c1-4cca-9e7d-8610d3467cd7-kube-api-access-mgg65\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.960140 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:37 crc kubenswrapper[4720]: I0122 07:00:37.980973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data" (OuterVolumeSpecName: "config-data") pod "e48ac237-80c1-4cca-9e7d-8610d3467cd7" (UID: "e48ac237-80c1-4cca-9e7d-8610d3467cd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.061861 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e48ac237-80c1-4cca-9e7d-8610d3467cd7-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.095577 4720 generic.go:334] "Generic (PLEG): container finished" podID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerID="aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c" exitCode=0 Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.095626 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerDied","Data":"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c"} Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.095660 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e48ac237-80c1-4cca-9e7d-8610d3467cd7","Type":"ContainerDied","Data":"f2cee8ad16ec9e3fae675e79833d9c55f19ebb3881dfcb16b40cee765725e338"} Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 
07:00:38.095682 4720 scope.go:117] "RemoveContainer" containerID="4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.095825 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.139854 4720 scope.go:117] "RemoveContainer" containerID="65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.146105 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.166839 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.170146 4720 scope.go:117] "RemoveContainer" containerID="aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182216 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.182652 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-notification-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182671 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-notification-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.182696 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="sg-core" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182703 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="sg-core" Jan 22 07:00:38 crc kubenswrapper[4720]: 
E0122 07:00:38.182718 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-central-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182724 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-central-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.182738 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="proxy-httpd" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182744 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="proxy-httpd" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182892 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-central-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182924 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="sg-core" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182936 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="ceilometer-notification-agent" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.182942 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" containerName="proxy-httpd" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.184360 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.187854 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.187986 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.188198 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.199032 4720 scope.go:117] "RemoveContainer" containerID="0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.205834 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.251007 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e48ac237-80c1-4cca-9e7d-8610d3467cd7" path="/var/lib/kubelet/pods/e48ac237-80c1-4cca-9e7d-8610d3467cd7/volumes" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.255821 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3" path="/var/lib/kubelet/pods/f007a0c1-7881-4fcd-8bc0-8ccf0d02d5b3/volumes" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.271339 4720 scope.go:117] "RemoveContainer" containerID="4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.272709 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe\": container with ID starting with 4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe not found: ID does 
not exist" containerID="4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.272773 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe"} err="failed to get container status \"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe\": rpc error: code = NotFound desc = could not find container \"4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe\": container with ID starting with 4ba0580ad485b8814f20f0361de16c519d8af2131daa30699cc8d6b0b8015bbe not found: ID does not exist" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.272808 4720 scope.go:117] "RemoveContainer" containerID="65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.274237 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff\": container with ID starting with 65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff not found: ID does not exist" containerID="65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.274358 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff"} err="failed to get container status \"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff\": rpc error: code = NotFound desc = could not find container \"65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff\": container with ID starting with 65b26f295e4bfd0776e384c37036f71b29391b936a23f0bf2ba43d7ce1025dff not found: ID does not exist" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.274395 4720 
scope.go:117] "RemoveContainer" containerID="aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.278503 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c\": container with ID starting with aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c not found: ID does not exist" containerID="aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.278558 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c"} err="failed to get container status \"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c\": rpc error: code = NotFound desc = could not find container \"aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c\": container with ID starting with aee39d7714577108bc858eeff2c7095cf9735e79aba0484e93fc980f07fe2e1c not found: ID does not exist" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.278593 4720 scope.go:117] "RemoveContainer" containerID="0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12" Jan 22 07:00:38 crc kubenswrapper[4720]: E0122 07:00:38.279292 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12\": container with ID starting with 0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12 not found: ID does not exist" containerID="0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.279373 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12"} err="failed to get container status \"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12\": rpc error: code = NotFound desc = could not find container \"0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12\": container with ID starting with 0f8b14088edb2c3713ecbd556d0f8d10eaf0a8426de801b9e7bb400a843cad12 not found: ID does not exist" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.292955 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.374466 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.374547 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.374597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.374638 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2kb\" (UniqueName: 
\"kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.376966 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.377015 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.377044 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.377255 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts\") 
pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478761 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478794 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478820 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2kb\" (UniqueName: \"kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478877 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 
07:00:38.478898 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.478946 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.482503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.482555 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.485640 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.487087 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data\") pod 
\"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.487967 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.495715 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.498161 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.511450 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2kb\" (UniqueName: \"kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb\") pod \"ceilometer-0\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:38 crc kubenswrapper[4720]: I0122 07:00:38.521401 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.015225 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.106000 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerStarted","Data":"9dfa328418f7ea9c8e67a7d77bd0a945db5c8865e8cacda3f8f31ecaf91bf4ee"} Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.106070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerStarted","Data":"866cad808f01296143c6c86db190f792ed85b387465f8d21a996f93b5a44e60d"} Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.106083 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerStarted","Data":"7b4e5ee684f977d5ff5bcf47ca080135e3692d2773d394788b124da755fd784b"} Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.106378 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.108318 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerStarted","Data":"2fd3d16a5f07d31a39c7fc6ed725d5e88e1fa8cfd60562bc2fb7723a681cc9bc"} Jan 22 07:00:39 crc kubenswrapper[4720]: I0122 07:00:39.133377 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.133356718 podStartE2EDuration="2.133356718s" podCreationTimestamp="2026-01-22 07:00:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:39.131476184 +0000 UTC m=+1531.273382909" watchObservedRunningTime="2026-01-22 07:00:39.133356718 +0000 UTC m=+1531.275263423" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.121322 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerStarted","Data":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.334561 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k9fts"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.342701 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k9fts"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.434480 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.434771 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerName="watcher-applier" containerID="cri-o://6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" gracePeriod=30 Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.499028 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher5b9d-account-delete-p2vqf"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.500984 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.536257 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.557795 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5b9d-account-delete-p2vqf"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.599248 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.599540 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" containerName="watcher-decision-engine" containerID="cri-o://3ba5eb6be0c78618c80b5fa97f5d88da65fd9d70e021735cc3ca8480aadcd0a4" gracePeriod=30 Jan 22 07:00:40 crc kubenswrapper[4720]: E0122 07:00:40.617351 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.629350 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8lm6\" (UniqueName: \"kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.629438 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: E0122 07:00:40.634992 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:00:40 crc kubenswrapper[4720]: E0122 07:00:40.643897 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:00:40 crc kubenswrapper[4720]: E0122 07:00:40.643980 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerName="watcher-applier" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.732217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8lm6\" (UniqueName: \"kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.732311 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.733258 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.757456 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8lm6\" (UniqueName: \"kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6\") pod \"watcher5b9d-account-delete-p2vqf\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:40 crc kubenswrapper[4720]: I0122 07:00:40.945400 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.136380 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.136951 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-kuttl-api-log" containerID="cri-o://866cad808f01296143c6c86db190f792ed85b387465f8d21a996f93b5a44e60d" gracePeriod=30 Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.136586 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerStarted","Data":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.137634 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" containerID="cri-o://9dfa328418f7ea9c8e67a7d77bd0a945db5c8865e8cacda3f8f31ecaf91bf4ee" gracePeriod=30 Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.156708 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.156:9322/\": EOF" Jan 22 07:00:41 crc kubenswrapper[4720]: I0122 07:00:41.569139 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5b9d-account-delete-p2vqf"] Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.147205 4720 generic.go:334] "Generic (PLEG): container finished" podID="7a08f27d-f169-4ef4-9154-a2df454c5d27" 
containerID="866cad808f01296143c6c86db190f792ed85b387465f8d21a996f93b5a44e60d" exitCode=143 Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.147294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerDied","Data":"866cad808f01296143c6c86db190f792ed85b387465f8d21a996f93b5a44e60d"} Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.151980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" event={"ID":"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972","Type":"ContainerStarted","Data":"68fcd493888b7f71aac73f22324b312a2abe1d9eebc546619dc858b40fce8675"} Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.152039 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" event={"ID":"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972","Type":"ContainerStarted","Data":"978b76742be8c1bca7388977818689d935d6a654964ca851a4422376b14cf670"} Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.189169 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" podStartSLOduration=2.189141913 podStartE2EDuration="2.189141913s" podCreationTimestamp="2026-01-22 07:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:42.167741801 +0000 UTC m=+1534.309648506" watchObservedRunningTime="2026-01-22 07:00:42.189141913 +0000 UTC m=+1534.331048608" Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.223349 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93d003a3-f42f-4a33-8960-be8f156d121c" path="/var/lib/kubelet/pods/93d003a3-f42f-4a33-8960-be8f156d121c/volumes" Jan 22 07:00:42 crc kubenswrapper[4720]: I0122 07:00:42.783277 4720 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:43 crc kubenswrapper[4720]: I0122 07:00:43.162134 4720 generic.go:334] "Generic (PLEG): container finished" podID="6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" containerID="68fcd493888b7f71aac73f22324b312a2abe1d9eebc546619dc858b40fce8675" exitCode=0 Jan 22 07:00:43 crc kubenswrapper[4720]: I0122 07:00:43.162250 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" event={"ID":"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972","Type":"ContainerDied","Data":"68fcd493888b7f71aac73f22324b312a2abe1d9eebc546619dc858b40fce8675"} Jan 22 07:00:43 crc kubenswrapper[4720]: I0122 07:00:43.165587 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerStarted","Data":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.202416 4720 generic.go:334] "Generic (PLEG): container finished" podID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerID="6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" exitCode=0 Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.202734 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"035a1409-387d-4f2a-a89e-b36a2036e29b","Type":"ContainerDied","Data":"6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4"} Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.375291 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.411447 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data\") pod \"035a1409-387d-4f2a-a89e-b36a2036e29b\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.411546 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle\") pod \"035a1409-387d-4f2a-a89e-b36a2036e29b\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.411568 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xck6h\" (UniqueName: \"kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h\") pod \"035a1409-387d-4f2a-a89e-b36a2036e29b\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.411616 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs\") pod \"035a1409-387d-4f2a-a89e-b36a2036e29b\" (UID: \"035a1409-387d-4f2a-a89e-b36a2036e29b\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.423991 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs" (OuterVolumeSpecName: "logs") pod "035a1409-387d-4f2a-a89e-b36a2036e29b" (UID: "035a1409-387d-4f2a-a89e-b36a2036e29b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.431483 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h" (OuterVolumeSpecName: "kube-api-access-xck6h") pod "035a1409-387d-4f2a-a89e-b36a2036e29b" (UID: "035a1409-387d-4f2a-a89e-b36a2036e29b"). InnerVolumeSpecName "kube-api-access-xck6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.474288 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "035a1409-387d-4f2a-a89e-b36a2036e29b" (UID: "035a1409-387d-4f2a-a89e-b36a2036e29b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.513121 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.513429 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xck6h\" (UniqueName: \"kubernetes.io/projected/035a1409-387d-4f2a-a89e-b36a2036e29b-kube-api-access-xck6h\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.513445 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/035a1409-387d-4f2a-a89e-b36a2036e29b-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.525117 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data" 
(OuterVolumeSpecName: "config-data") pod "035a1409-387d-4f2a-a89e-b36a2036e29b" (UID: "035a1409-387d-4f2a-a89e-b36a2036e29b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.615403 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/035a1409-387d-4f2a-a89e-b36a2036e29b-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.674204 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.720619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts\") pod \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.720693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8lm6\" (UniqueName: \"kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6\") pod \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\" (UID: \"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972\") " Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.721367 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" (UID: "6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.724714 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6" (OuterVolumeSpecName: "kube-api-access-r8lm6") pod "6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" (UID: "6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972"). InnerVolumeSpecName "kube-api-access-r8lm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.822503 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:44 crc kubenswrapper[4720]: I0122 07:00:44.822847 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8lm6\" (UniqueName: \"kubernetes.io/projected/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972-kube-api-access-r8lm6\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.028048 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.156:9322/\": read tcp 10.217.0.2:47510->10.217.0.156:9322: read: connection reset by peer" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.028688 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.156:9322/\": dial tcp 10.217.0.156:9322: connect: connection refused" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.054725 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:45 
crc kubenswrapper[4720]: I0122 07:00:45.221292 4720 generic.go:334] "Generic (PLEG): container finished" podID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" containerID="3ba5eb6be0c78618c80b5fa97f5d88da65fd9d70e021735cc3ca8480aadcd0a4" exitCode=0 Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.221384 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bed144f1-70f3-49fa-a09c-f10b5b03c2c2","Type":"ContainerDied","Data":"3ba5eb6be0c78618c80b5fa97f5d88da65fd9d70e021735cc3ca8480aadcd0a4"} Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.223590 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"035a1409-387d-4f2a-a89e-b36a2036e29b","Type":"ContainerDied","Data":"ea58adb7236bd130551297576a1decef80e9b28491e169690f84963c7add1c7b"} Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.223639 4720 scope.go:117] "RemoveContainer" containerID="6be262e202ffb82e2f3ecff3a6641adf81a5e307769b5b0bd2a197728b6160e4" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.223802 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.233683 4720 generic.go:334] "Generic (PLEG): container finished" podID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerID="9dfa328418f7ea9c8e67a7d77bd0a945db5c8865e8cacda3f8f31ecaf91bf4ee" exitCode=0 Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.233819 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerDied","Data":"9dfa328418f7ea9c8e67a7d77bd0a945db5c8865e8cacda3f8f31ecaf91bf4ee"} Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.241035 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" event={"ID":"6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972","Type":"ContainerDied","Data":"978b76742be8c1bca7388977818689d935d6a654964ca851a4422376b14cf670"} Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.241085 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978b76742be8c1bca7388977818689d935d6a654964ca851a4422376b14cf670" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.241162 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5b9d-account-delete-p2vqf" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.248262 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerStarted","Data":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.249448 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.279972 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9523720660000001 podStartE2EDuration="7.279889667s" podCreationTimestamp="2026-01-22 07:00:38 +0000 UTC" firstStartedPulling="2026-01-22 07:00:39.024353873 +0000 UTC m=+1531.166260578" lastFinishedPulling="2026-01-22 07:00:44.351871474 +0000 UTC m=+1536.493778179" observedRunningTime="2026-01-22 07:00:45.277387676 +0000 UTC m=+1537.419294391" watchObservedRunningTime="2026-01-22 07:00:45.279889667 +0000 UTC m=+1537.421796372" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.411103 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.436785 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.442642 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.537438 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca\") pod \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.537550 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs\") pod \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.537715 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle\") pod \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.537753 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8lz\" (UniqueName: \"kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz\") pod \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.537843 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data\") pod \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\" (UID: \"bed144f1-70f3-49fa-a09c-f10b5b03c2c2\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.538120 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs" (OuterVolumeSpecName: "logs") pod "bed144f1-70f3-49fa-a09c-f10b5b03c2c2" (UID: "bed144f1-70f3-49fa-a09c-f10b5b03c2c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.538811 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.543922 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz" (OuterVolumeSpecName: "kube-api-access-xq8lz") pod "bed144f1-70f3-49fa-a09c-f10b5b03c2c2" (UID: "bed144f1-70f3-49fa-a09c-f10b5b03c2c2"). InnerVolumeSpecName "kube-api-access-xq8lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.570021 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bed144f1-70f3-49fa-a09c-f10b5b03c2c2" (UID: "bed144f1-70f3-49fa-a09c-f10b5b03c2c2"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.590183 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data" (OuterVolumeSpecName: "config-data") pod "bed144f1-70f3-49fa-a09c-f10b5b03c2c2" (UID: "bed144f1-70f3-49fa-a09c-f10b5b03c2c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.604297 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bed144f1-70f3-49fa-a09c-f10b5b03c2c2" (UID: "bed144f1-70f3-49fa-a09c-f10b5b03c2c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.642055 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.642096 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.642109 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8lz\" (UniqueName: \"kubernetes.io/projected/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-kube-api-access-xq8lz\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.642121 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed144f1-70f3-49fa-a09c-f10b5b03c2c2-config-data\") on node 
\"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.675225 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.745021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.745101 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqm8j\" (UniqueName: \"kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.745990 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.746111 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.746147 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " 
Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.746191 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.746220 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs\") pod \"7a08f27d-f169-4ef4-9154-a2df454c5d27\" (UID: \"7a08f27d-f169-4ef4-9154-a2df454c5d27\") " Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.746547 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs" (OuterVolumeSpecName: "logs") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.747144 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a08f27d-f169-4ef4-9154-a2df454c5d27-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.751212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j" (OuterVolumeSpecName: "kube-api-access-bqm8j") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "kube-api-access-bqm8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.769400 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.771054 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.788063 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data" (OuterVolumeSpecName: "config-data") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.788749 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.794519 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a08f27d-f169-4ef4-9154-a2df454c5d27" (UID: "7a08f27d-f169-4ef4-9154-a2df454c5d27"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847883 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847935 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847946 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847956 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqm8j\" (UniqueName: \"kubernetes.io/projected/7a08f27d-f169-4ef4-9154-a2df454c5d27-kube-api-access-bqm8j\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847965 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:45 crc kubenswrapper[4720]: I0122 07:00:45.847973 4720 reconciler_common.go:293] "Volume detached for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7a08f27d-f169-4ef4-9154-a2df454c5d27-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.222476 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" path="/var/lib/kubelet/pods/035a1409-387d-4f2a-a89e-b36a2036e29b/volumes" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.260276 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7a08f27d-f169-4ef4-9154-a2df454c5d27","Type":"ContainerDied","Data":"7b4e5ee684f977d5ff5bcf47ca080135e3692d2773d394788b124da755fd784b"} Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.260550 4720 scope.go:117] "RemoveContainer" containerID="9dfa328418f7ea9c8e67a7d77bd0a945db5c8865e8cacda3f8f31ecaf91bf4ee" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.260765 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.266854 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bed144f1-70f3-49fa-a09c-f10b5b03c2c2","Type":"ContainerDied","Data":"66c32b876fffea5e333ea1a61417dfa58d24232db81b70622ecc6f11c6f0279e"} Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.266999 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.269427 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-central-agent" containerID="cri-o://5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" gracePeriod=30 Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.269534 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="proxy-httpd" containerID="cri-o://9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" gracePeriod=30 Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.269565 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="sg-core" containerID="cri-o://292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" gracePeriod=30 Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.269636 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-notification-agent" containerID="cri-o://6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" gracePeriod=30 Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.305414 4720 scope.go:117] "RemoveContainer" containerID="866cad808f01296143c6c86db190f792ed85b387465f8d21a996f93b5a44e60d" Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.308509 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.321388 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.333230 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.339664 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:00:46 crc kubenswrapper[4720]: I0122 07:00:46.365599 4720 scope.go:117] "RemoveContainer" containerID="3ba5eb6be0c78618c80b5fa97f5d88da65fd9d70e021735cc3ca8480aadcd0a4" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.127830 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181688 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181741 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181830 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181869 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx2kb\" (UniqueName: \"kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181893 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.181969 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml\") pod \"4f12fc81-5045-4a93-95ea-8a57050168c5\" (UID: \"4f12fc81-5045-4a93-95ea-8a57050168c5\") " Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.182886 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.183851 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.187577 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb" (OuterVolumeSpecName: "kube-api-access-tx2kb") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "kube-api-access-tx2kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.187841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts" (OuterVolumeSpecName: "scripts") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.208162 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.227885 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.248625 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.277386 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data" (OuterVolumeSpecName: "config-data") pod "4f12fc81-5045-4a93-95ea-8a57050168c5" (UID: "4f12fc81-5045-4a93-95ea-8a57050168c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284379 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284430 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284446 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx2kb\" (UniqueName: \"kubernetes.io/projected/4f12fc81-5045-4a93-95ea-8a57050168c5-kube-api-access-tx2kb\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284466 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284479 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284489 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284501 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f12fc81-5045-4a93-95ea-8a57050168c5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.284512 4720 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f12fc81-5045-4a93-95ea-8a57050168c5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287059 4720 generic.go:334] "Generic (PLEG): container finished" podID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" exitCode=0 Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287103 4720 generic.go:334] "Generic (PLEG): container finished" podID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" exitCode=2 Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287114 4720 generic.go:334] "Generic (PLEG): container finished" podID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" exitCode=0 Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287122 4720 generic.go:334] "Generic (PLEG): container finished" podID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" exitCode=0 Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287167 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerDied","Data":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287200 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerDied","Data":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287210 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerDied","Data":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287218 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerDied","Data":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287227 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4f12fc81-5045-4a93-95ea-8a57050168c5","Type":"ContainerDied","Data":"2fd3d16a5f07d31a39c7fc6ed725d5e88e1fa8cfd60562bc2fb7723a681cc9bc"} Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287248 4720 scope.go:117] "RemoveContainer" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.287401 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.357090 4720 scope.go:117] "RemoveContainer" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.363741 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.372168 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.378964 4720 scope.go:117] "RemoveContainer" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.390953 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391506 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="sg-core" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391530 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="sg-core" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391551 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-notification-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391559 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-notification-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391572 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="proxy-httpd" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391580 4720 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="proxy-httpd" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391597 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391607 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391627 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-central-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391635 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-central-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391646 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" containerName="watcher-decision-engine" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391654 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" containerName="watcher-decision-engine" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391676 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" containerName="mariadb-account-delete" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391685 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" containerName="mariadb-account-delete" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391699 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-kuttl-api-log" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 
07:00:47.391707 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-kuttl-api-log" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.391718 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerName="watcher-applier" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391726 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerName="watcher-applier" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.391991 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="proxy-httpd" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392006 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="035a1409-387d-4f2a-a89e-b36a2036e29b" containerName="watcher-applier" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392113 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-kuttl-api-log" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392128 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-central-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392144 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" containerName="mariadb-account-delete" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392160 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" containerName="watcher-api" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392173 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" containerName="watcher-decision-engine" Jan 22 
07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392183 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="ceilometer-notification-agent" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.392194 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" containerName="sg-core" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.395942 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.398622 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.399008 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.399547 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.411156 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.417241 4720 scope.go:117] "RemoveContainer" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.439796 4720 scope.go:117] "RemoveContainer" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.440487 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": container with ID starting with 9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e 
not found: ID does not exist" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.440533 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} err="failed to get container status \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": rpc error: code = NotFound desc = could not find container \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": container with ID starting with 9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.440560 4720 scope.go:117] "RemoveContainer" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.440822 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": container with ID starting with 292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9 not found: ID does not exist" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.440845 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} err="failed to get container status \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": rpc error: code = NotFound desc = could not find container \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": container with ID starting with 292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 
07:00:47.440864 4720 scope.go:117] "RemoveContainer" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.441302 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": container with ID starting with 6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536 not found: ID does not exist" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.441347 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} err="failed to get container status \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": rpc error: code = NotFound desc = could not find container \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": container with ID starting with 6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.441380 4720 scope.go:117] "RemoveContainer" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: E0122 07:00:47.441695 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": container with ID starting with 5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3 not found: ID does not exist" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.441748 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} err="failed to get container status \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": rpc error: code = NotFound desc = could not find container \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": container with ID starting with 5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.441770 4720 scope.go:117] "RemoveContainer" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.441988 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} err="failed to get container status \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": rpc error: code = NotFound desc = could not find container \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": container with ID starting with 9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442010 4720 scope.go:117] "RemoveContainer" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442191 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} err="failed to get container status \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": rpc error: code = NotFound desc = could not find container \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": container with ID starting with 292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9 not found: ID does not 
exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442213 4720 scope.go:117] "RemoveContainer" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442448 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} err="failed to get container status \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": rpc error: code = NotFound desc = could not find container \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": container with ID starting with 6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442470 4720 scope.go:117] "RemoveContainer" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442648 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} err="failed to get container status \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": rpc error: code = NotFound desc = could not find container \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": container with ID starting with 5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442668 4720 scope.go:117] "RemoveContainer" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442836 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} err="failed to get container status 
\"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": rpc error: code = NotFound desc = could not find container \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": container with ID starting with 9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.442854 4720 scope.go:117] "RemoveContainer" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443027 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} err="failed to get container status \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": rpc error: code = NotFound desc = could not find container \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": container with ID starting with 292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443044 4720 scope.go:117] "RemoveContainer" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443235 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} err="failed to get container status \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": rpc error: code = NotFound desc = could not find container \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": container with ID starting with 6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443255 4720 scope.go:117] "RemoveContainer" 
containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443470 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} err="failed to get container status \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": rpc error: code = NotFound desc = could not find container \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": container with ID starting with 5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443509 4720 scope.go:117] "RemoveContainer" containerID="9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.443942 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e"} err="failed to get container status \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": rpc error: code = NotFound desc = could not find container \"9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e\": container with ID starting with 9e98f49a7c116496ca36ef9c6a5f3cca8799203b5c279698204bc30fe9dddc7e not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.444008 4720 scope.go:117] "RemoveContainer" containerID="292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.444528 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9"} err="failed to get container status \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": rpc error: code = NotFound desc = could 
not find container \"292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9\": container with ID starting with 292c458995b0883a4a33284f1333814e41d5050459517ccdd6cd16ec5ab0d1e9 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.444590 4720 scope.go:117] "RemoveContainer" containerID="6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.445103 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536"} err="failed to get container status \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": rpc error: code = NotFound desc = could not find container \"6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536\": container with ID starting with 6d3016b2bd4304885105404af4fe7de822f33519c0cb4c7bb64562fc86bb0536 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.445126 4720 scope.go:117] "RemoveContainer" containerID="5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.445380 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3"} err="failed to get container status \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": rpc error: code = NotFound desc = could not find container \"5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3\": container with ID starting with 5b5e77bf6746ec20e215d385c4e01d0594d986a094232a3c55141cf9187b04a3 not found: ID does not exist" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487111 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9ft\" (UniqueName: 
\"kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487174 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487201 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487341 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487367 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487398 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.487470 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589728 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589792 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9ft\" (UniqueName: \"kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft\") pod \"ceilometer-0\" (UID: 
\"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589814 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589872 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.589957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.591306 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.592253 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.597163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.597996 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.598322 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.599025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.599350 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.623625 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9ft\" (UniqueName: \"kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft\") pod \"ceilometer-0\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:47 crc kubenswrapper[4720]: I0122 07:00:47.720528 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:48 crc kubenswrapper[4720]: I0122 07:00:48.265692 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f12fc81-5045-4a93-95ea-8a57050168c5" path="/var/lib/kubelet/pods/4f12fc81-5045-4a93-95ea-8a57050168c5/volumes" Jan 22 07:00:48 crc kubenswrapper[4720]: I0122 07:00:48.266960 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a08f27d-f169-4ef4-9154-a2df454c5d27" path="/var/lib/kubelet/pods/7a08f27d-f169-4ef4-9154-a2df454c5d27/volumes" Jan 22 07:00:48 crc kubenswrapper[4720]: I0122 07:00:48.268059 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bed144f1-70f3-49fa-a09c-f10b5b03c2c2" path="/var/lib/kubelet/pods/bed144f1-70f3-49fa-a09c-f10b5b03c2c2/volumes" Jan 22 07:00:48 crc kubenswrapper[4720]: W0122 07:00:48.357793 4720 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27b3f23_c680_4c3b_9986_f86e585bd220.slice/crio-715bd0b0805943accca0a301efd63edd114c86648c37594377994f81c7bb7e54 WatchSource:0}: Error finding container 715bd0b0805943accca0a301efd63edd114c86648c37594377994f81c7bb7e54: Status 404 returned error can't find the container with id 715bd0b0805943accca0a301efd63edd114c86648c37594377994f81c7bb7e54 Jan 22 07:00:48 crc kubenswrapper[4720]: I0122 07:00:48.365877 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:00:49 crc kubenswrapper[4720]: I0122 07:00:49.320770 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerStarted","Data":"715bd0b0805943accca0a301efd63edd114c86648c37594377994f81c7bb7e54"} Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.537456 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher5b9d-account-delete-p2vqf"] Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.546084 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"] Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.553215 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2dww7"] Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.561533 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher5b9d-account-delete-p2vqf"] Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.568395 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-5b9d-account-create-update-hh4vt"] Jan 22 07:00:50 crc kubenswrapper[4720]: I0122 07:00:50.574178 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-2dww7"] Jan 22 07:00:51 crc 
kubenswrapper[4720]: I0122 07:00:51.224024 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-7sq2j"] Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.225877 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.243407 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc"] Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.245385 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.247456 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.258901 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7sq2j"] Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.272943 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc"] Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.402799 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9q4\" (UniqueName: \"kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4\") pod \"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.402928 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts\") pod 
\"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.403354 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpkb\" (UniqueName: \"kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.403530 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.505596 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts\") pod \"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.505747 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpkb\" (UniqueName: \"kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.506968 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts\") pod \"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.507061 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.507121 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl9q4\" (UniqueName: \"kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4\") pod \"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.507970 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.529975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpkb\" (UniqueName: \"kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb\") pod \"watcher-95d6-account-create-update-pxjbc\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 
07:00:51.535700 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl9q4\" (UniqueName: \"kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4\") pod \"watcher-db-create-7sq2j\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.551669 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:51 crc kubenswrapper[4720]: I0122 07:00:51.580421 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:52 crc kubenswrapper[4720]: W0122 07:00:52.218124 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ebd5b4a_64cb_4011_a9ff_483f4643d5b2.slice/crio-685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf WatchSource:0}: Error finding container 685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf: Status 404 returned error can't find the container with id 685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.222616 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c71a30-7002-460d-aaab-0a7bc54247fa" path="/var/lib/kubelet/pods/51c71a30-7002-460d-aaab-0a7bc54247fa/volumes" Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.233695 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972" path="/var/lib/kubelet/pods/6ec1b0a8-dfba-4e68-a9f4-38aa45d8a972/volumes" Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.234437 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5a9f1b-a220-45c7-8902-634631838ea7" 
path="/var/lib/kubelet/pods/8d5a9f1b-a220-45c7-8902-634631838ea7/volumes" Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.237488 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc"] Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.350831 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7sq2j"] Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.366987 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerStarted","Data":"757303994a3020292d3ae5ec21cab4ba66ba73fd0ecbabf9e9a7cc6b3c47f8ad"} Jan 22 07:00:52 crc kubenswrapper[4720]: I0122 07:00:52.379799 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" event={"ID":"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2","Type":"ContainerStarted","Data":"685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf"} Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.415447 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" event={"ID":"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2","Type":"ContainerStarted","Data":"339c85955f662dca6dad9a2d3eccc74d3dc10f9483591310bd00b569425c12cc"} Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.419830 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7sq2j" event={"ID":"2b5de5b3-1410-4b2c-92ab-85730d07e10c","Type":"ContainerStarted","Data":"0774a0bdfd2635d1f7fe0d734d0e352b30cf542549b9e5983b4edd34c4e1cd83"} Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.419879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7sq2j" 
event={"ID":"2b5de5b3-1410-4b2c-92ab-85730d07e10c","Type":"ContainerStarted","Data":"2a912b5dadf91274fd630c10560f880678ebf65c84da6d5169b20bfc95d2052b"} Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.427882 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerStarted","Data":"05aa2dfae7f3f14ff9245077c19cfc721fcce175fbb298d1ad254fd44a8085a1"} Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.435196 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" podStartSLOduration=2.435163127 podStartE2EDuration="2.435163127s" podCreationTimestamp="2026-01-22 07:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:53.430683459 +0000 UTC m=+1545.572590184" watchObservedRunningTime="2026-01-22 07:00:53.435163127 +0000 UTC m=+1545.577069832" Jan 22 07:00:53 crc kubenswrapper[4720]: I0122 07:00:53.453830 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-7sq2j" podStartSLOduration=2.453690566 podStartE2EDuration="2.453690566s" podCreationTimestamp="2026-01-22 07:00:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:00:53.451493804 +0000 UTC m=+1545.593400529" watchObservedRunningTime="2026-01-22 07:00:53.453690566 +0000 UTC m=+1545.595597271" Jan 22 07:00:54 crc kubenswrapper[4720]: I0122 07:00:54.439427 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerStarted","Data":"8426a058bfe5723aa93e1fba658e508219b244875be7e0a7dc492c4894145da4"} Jan 22 07:00:54 crc kubenswrapper[4720]: I0122 
07:00:54.442577 4720 generic.go:334] "Generic (PLEG): container finished" podID="4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" containerID="339c85955f662dca6dad9a2d3eccc74d3dc10f9483591310bd00b569425c12cc" exitCode=0 Jan 22 07:00:54 crc kubenswrapper[4720]: I0122 07:00:54.443123 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" event={"ID":"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2","Type":"ContainerDied","Data":"339c85955f662dca6dad9a2d3eccc74d3dc10f9483591310bd00b569425c12cc"} Jan 22 07:00:54 crc kubenswrapper[4720]: I0122 07:00:54.444716 4720 generic.go:334] "Generic (PLEG): container finished" podID="2b5de5b3-1410-4b2c-92ab-85730d07e10c" containerID="0774a0bdfd2635d1f7fe0d734d0e352b30cf542549b9e5983b4edd34c4e1cd83" exitCode=0 Jan 22 07:00:54 crc kubenswrapper[4720]: I0122 07:00:54.444779 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7sq2j" event={"ID":"2b5de5b3-1410-4b2c-92ab-85730d07e10c","Type":"ContainerDied","Data":"0774a0bdfd2635d1f7fe0d734d0e352b30cf542549b9e5983b4edd34c4e1cd83"} Jan 22 07:00:55 crc kubenswrapper[4720]: I0122 07:00:55.937295 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.003084 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.116623 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9q4\" (UniqueName: \"kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4\") pod \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.116701 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts\") pod \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.116792 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts\") pod \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\" (UID: \"2b5de5b3-1410-4b2c-92ab-85730d07e10c\") " Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.116841 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dpkb\" (UniqueName: \"kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb\") pod \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\" (UID: \"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2\") " Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.117443 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" (UID: "4ebd5b4a-64cb-4011-a9ff-483f4643d5b2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.118191 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b5de5b3-1410-4b2c-92ab-85730d07e10c" (UID: "2b5de5b3-1410-4b2c-92ab-85730d07e10c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.123928 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb" (OuterVolumeSpecName: "kube-api-access-5dpkb") pod "4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" (UID: "4ebd5b4a-64cb-4011-a9ff-483f4643d5b2"). InnerVolumeSpecName "kube-api-access-5dpkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.125243 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4" (OuterVolumeSpecName: "kube-api-access-bl9q4") pod "2b5de5b3-1410-4b2c-92ab-85730d07e10c" (UID: "2b5de5b3-1410-4b2c-92ab-85730d07e10c"). InnerVolumeSpecName "kube-api-access-bl9q4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.220656 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl9q4\" (UniqueName: \"kubernetes.io/projected/2b5de5b3-1410-4b2c-92ab-85730d07e10c-kube-api-access-bl9q4\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.220697 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.220708 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b5de5b3-1410-4b2c-92ab-85730d07e10c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.220722 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dpkb\" (UniqueName: \"kubernetes.io/projected/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2-kube-api-access-5dpkb\") on node \"crc\" DevicePath \"\"" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.466411 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" event={"ID":"4ebd5b4a-64cb-4011-a9ff-483f4643d5b2","Type":"ContainerDied","Data":"685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf"} Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.466492 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685e6ad16573c4e6cda22e26e1b2e72ed8e4da236e9678b9f4544146734c8caf" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.466432 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.478429 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7sq2j" event={"ID":"2b5de5b3-1410-4b2c-92ab-85730d07e10c","Type":"ContainerDied","Data":"2a912b5dadf91274fd630c10560f880678ebf65c84da6d5169b20bfc95d2052b"} Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.478476 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a912b5dadf91274fd630c10560f880678ebf65c84da6d5169b20bfc95d2052b" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.478559 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7sq2j" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.496028 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerStarted","Data":"66e14d3c508834dcb4a06d4a6feaa2ee0b3399bb1f5a553c5723c0f5f75caabb"} Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.497511 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:00:56 crc kubenswrapper[4720]: I0122 07:00:56.522006 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.34997424 podStartE2EDuration="9.521983088s" podCreationTimestamp="2026-01-22 07:00:47 +0000 UTC" firstStartedPulling="2026-01-22 07:00:48.360943134 +0000 UTC m=+1540.502849839" lastFinishedPulling="2026-01-22 07:00:55.532951982 +0000 UTC m=+1547.674858687" observedRunningTime="2026-01-22 07:00:56.519131297 +0000 UTC m=+1548.661038022" watchObservedRunningTime="2026-01-22 07:00:56.521983088 +0000 UTC m=+1548.663889793" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.134604 
4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-cron-29484421-kq8hq"] Jan 22 07:01:00 crc kubenswrapper[4720]: E0122 07:01:00.135603 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5de5b3-1410-4b2c-92ab-85730d07e10c" containerName="mariadb-database-create" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.135620 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5de5b3-1410-4b2c-92ab-85730d07e10c" containerName="mariadb-database-create" Jan 22 07:01:00 crc kubenswrapper[4720]: E0122 07:01:00.135634 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" containerName="mariadb-account-create-update" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.135651 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" containerName="mariadb-account-create-update" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.135886 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5de5b3-1410-4b2c-92ab-85730d07e10c" containerName="mariadb-database-create" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.135902 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" containerName="mariadb-account-create-update" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.136863 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.148140 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29484421-kq8hq"] Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.193319 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.193667 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.193943 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phk4h\" (UniqueName: \"kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.194134 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 
07:01:00.297352 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phk4h\" (UniqueName: \"kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.297526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.297638 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.297846 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.313277 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 
07:01:00.314049 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.316234 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phk4h\" (UniqueName: \"kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.316040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data\") pod \"keystone-cron-29484421-kq8hq\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:00 crc kubenswrapper[4720]: I0122 07:01:00.472317 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.011104 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-cron-29484421-kq8hq"] Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.551814 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" event={"ID":"305d07dc-f843-4277-b728-1f00028fbac5","Type":"ContainerStarted","Data":"20ec6a7aa8ab0b1b55e7f1146cc4b46d845585b852dd8b40bee4f7f1fcf0a710"} Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.552325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" event={"ID":"305d07dc-f843-4277-b728-1f00028fbac5","Type":"ContainerStarted","Data":"c9c05f05b476d7dadbc3096c6a55fb220548ed447b987ad3ad14af3a6b30d8ca"} Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.704091 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" podStartSLOduration=1.704064923 podStartE2EDuration="1.704064923s" podCreationTimestamp="2026-01-22 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:01.698413682 +0000 UTC m=+1553.840320387" watchObservedRunningTime="2026-01-22 07:01:01.704064923 +0000 UTC m=+1553.845971648" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.770684 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"] Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.772119 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.774628 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6kf7j" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.775283 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.784603 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"] Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.843800 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpchh\" (UniqueName: \"kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.843856 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.843972 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.844044 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.945368 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.945437 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.945531 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpchh\" (UniqueName: \"kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.945552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 
07:01:01.950561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.951971 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.953950 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:01 crc kubenswrapper[4720]: I0122 07:01:01.974456 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpchh\" (UniqueName: \"kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh\") pod \"watcher-kuttl-db-sync-kzmdm\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.090317 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.137031 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.139202 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.153859 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.258571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrqv\" (UniqueName: \"kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.258897 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.258943 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.360211 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-pdrqv\" (UniqueName: \"kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.360288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.360327 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.360973 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.361127 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.385224 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrqv\" (UniqueName: 
\"kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv\") pod \"community-operators-wfms8\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.502006 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:02 crc kubenswrapper[4720]: I0122 07:01:02.668091 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"] Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.147682 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.572031 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" event={"ID":"cf4d25e1-55a2-47e0-8c43-e138cee6d47c","Type":"ContainerStarted","Data":"69e0c984c8e825301e66dd11bd51bdfac2c9df7c554d0cd08fa9f6efd71e1f91"} Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.572108 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" event={"ID":"cf4d25e1-55a2-47e0-8c43-e138cee6d47c","Type":"ContainerStarted","Data":"a447c7bf6777644ab2b276f5d779a67e016d8a37b51ae12a8a4984fe68691e58"} Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.574185 4720 generic.go:334] "Generic (PLEG): container finished" podID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerID="fe5d57e73f151375c4890821fc5af1f8bff53e21409e07974c4fc352aae8ab14" exitCode=0 Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.574247 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerDied","Data":"fe5d57e73f151375c4890821fc5af1f8bff53e21409e07974c4fc352aae8ab14"} Jan 
22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.574306 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerStarted","Data":"a709d33a95e78051a10176d7383c7dc593faf73d444ac1a662f078f174e45df0"} Jan 22 07:01:03 crc kubenswrapper[4720]: I0122 07:01:03.599298 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" podStartSLOduration=2.599257388 podStartE2EDuration="2.599257388s" podCreationTimestamp="2026-01-22 07:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:03.59686791 +0000 UTC m=+1555.738774625" watchObservedRunningTime="2026-01-22 07:01:03.599257388 +0000 UTC m=+1555.741164093" Jan 22 07:01:04 crc kubenswrapper[4720]: I0122 07:01:04.654579 4720 generic.go:334] "Generic (PLEG): container finished" podID="305d07dc-f843-4277-b728-1f00028fbac5" containerID="20ec6a7aa8ab0b1b55e7f1146cc4b46d845585b852dd8b40bee4f7f1fcf0a710" exitCode=0 Jan 22 07:01:04 crc kubenswrapper[4720]: I0122 07:01:04.654859 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" event={"ID":"305d07dc-f843-4277-b728-1f00028fbac5","Type":"ContainerDied","Data":"20ec6a7aa8ab0b1b55e7f1146cc4b46d845585b852dd8b40bee4f7f1fcf0a710"} Jan 22 07:01:05 crc kubenswrapper[4720]: I0122 07:01:05.665394 4720 generic.go:334] "Generic (PLEG): container finished" podID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerID="5827002a73daaea07bd47420f997a62c64f470eb146102fe6046411ba77a5c77" exitCode=0 Jan 22 07:01:05 crc kubenswrapper[4720]: I0122 07:01:05.665488 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" 
event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerDied","Data":"5827002a73daaea07bd47420f997a62c64f470eb146102fe6046411ba77a5c77"} Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.227466 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.322709 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data\") pod \"305d07dc-f843-4277-b728-1f00028fbac5\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.322769 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phk4h\" (UniqueName: \"kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h\") pod \"305d07dc-f843-4277-b728-1f00028fbac5\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.322953 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys\") pod \"305d07dc-f843-4277-b728-1f00028fbac5\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.323011 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle\") pod \"305d07dc-f843-4277-b728-1f00028fbac5\" (UID: \"305d07dc-f843-4277-b728-1f00028fbac5\") " Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.344590 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h" 
(OuterVolumeSpecName: "kube-api-access-phk4h") pod "305d07dc-f843-4277-b728-1f00028fbac5" (UID: "305d07dc-f843-4277-b728-1f00028fbac5"). InnerVolumeSpecName "kube-api-access-phk4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.346098 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "305d07dc-f843-4277-b728-1f00028fbac5" (UID: "305d07dc-f843-4277-b728-1f00028fbac5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.430646 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phk4h\" (UniqueName: \"kubernetes.io/projected/305d07dc-f843-4277-b728-1f00028fbac5-kube-api-access-phk4h\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.431051 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.445475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data" (OuterVolumeSpecName: "config-data") pod "305d07dc-f843-4277-b728-1f00028fbac5" (UID: "305d07dc-f843-4277-b728-1f00028fbac5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.525765 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "305d07dc-f843-4277-b728-1f00028fbac5" (UID: "305d07dc-f843-4277-b728-1f00028fbac5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.537287 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.537334 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/305d07dc-f843-4277-b728-1f00028fbac5-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.721697 4720 generic.go:334] "Generic (PLEG): container finished" podID="cf4d25e1-55a2-47e0-8c43-e138cee6d47c" containerID="69e0c984c8e825301e66dd11bd51bdfac2c9df7c554d0cd08fa9f6efd71e1f91" exitCode=0
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.721784 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" event={"ID":"cf4d25e1-55a2-47e0-8c43-e138cee6d47c","Type":"ContainerDied","Data":"69e0c984c8e825301e66dd11bd51bdfac2c9df7c554d0cd08fa9f6efd71e1f91"}
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.749796 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerStarted","Data":"bea1b396ffb7a2f8abd87f60dd17acc8a7e4483571e0b7154f0689282d9478f5"}
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.778599 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq" event={"ID":"305d07dc-f843-4277-b728-1f00028fbac5","Type":"ContainerDied","Data":"c9c05f05b476d7dadbc3096c6a55fb220548ed447b987ad3ad14af3a6b30d8ca"}
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.779054 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9c05f05b476d7dadbc3096c6a55fb220548ed447b987ad3ad14af3a6b30d8ca"
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.779020 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-cron-29484421-kq8hq"
Jan 22 07:01:06 crc kubenswrapper[4720]: I0122 07:01:06.794826 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wfms8" podStartSLOduration=2.190827476 podStartE2EDuration="4.794804867s" podCreationTimestamp="2026-01-22 07:01:02 +0000 UTC" firstStartedPulling="2026-01-22 07:01:03.575622662 +0000 UTC m=+1555.717529367" lastFinishedPulling="2026-01-22 07:01:06.179600053 +0000 UTC m=+1558.321506758" observedRunningTime="2026-01-22 07:01:06.783774001 +0000 UTC m=+1558.925680706" watchObservedRunningTime="2026-01-22 07:01:06.794804867 +0000 UTC m=+1558.936711572"
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.194340 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.267386 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data\") pod \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") "
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.267474 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data\") pod \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") "
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.267506 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpchh\" (UniqueName: \"kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh\") pod \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") "
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.275194 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cf4d25e1-55a2-47e0-8c43-e138cee6d47c" (UID: "cf4d25e1-55a2-47e0-8c43-e138cee6d47c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.279015 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh" (OuterVolumeSpecName: "kube-api-access-kpchh") pod "cf4d25e1-55a2-47e0-8c43-e138cee6d47c" (UID: "cf4d25e1-55a2-47e0-8c43-e138cee6d47c"). InnerVolumeSpecName "kube-api-access-kpchh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.325195 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data" (OuterVolumeSpecName: "config-data") pod "cf4d25e1-55a2-47e0-8c43-e138cee6d47c" (UID: "cf4d25e1-55a2-47e0-8c43-e138cee6d47c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.368877 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle\") pod \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\" (UID: \"cf4d25e1-55a2-47e0-8c43-e138cee6d47c\") "
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.369565 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.369589 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.369601 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpchh\" (UniqueName: \"kubernetes.io/projected/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-kube-api-access-kpchh\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.397431 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf4d25e1-55a2-47e0-8c43-e138cee6d47c" (UID: "cf4d25e1-55a2-47e0-8c43-e138cee6d47c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.471174 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4d25e1-55a2-47e0-8c43-e138cee6d47c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.798567 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm" event={"ID":"cf4d25e1-55a2-47e0-8c43-e138cee6d47c","Type":"ContainerDied","Data":"a447c7bf6777644ab2b276f5d779a67e016d8a37b51ae12a8a4984fe68691e58"}
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.798612 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a447c7bf6777644ab2b276f5d779a67e016d8a37b51ae12a8a4984fe68691e58"
Jan 22 07:01:08 crc kubenswrapper[4720]: I0122 07:01:08.798872 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.055335 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: E0122 07:01:09.055764 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4d25e1-55a2-47e0-8c43-e138cee6d47c" containerName="watcher-kuttl-db-sync"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.055782 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4d25e1-55a2-47e0-8c43-e138cee6d47c" containerName="watcher-kuttl-db-sync"
Jan 22 07:01:09 crc kubenswrapper[4720]: E0122 07:01:09.055796 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305d07dc-f843-4277-b728-1f00028fbac5" containerName="keystone-cron"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.055804 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="305d07dc-f843-4277-b728-1f00028fbac5" containerName="keystone-cron"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.055998 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4d25e1-55a2-47e0-8c43-e138cee6d47c" containerName="watcher-kuttl-db-sync"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.056020 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="305d07dc-f843-4277-b728-1f00028fbac5" containerName="keystone-cron"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.056595 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.060061 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.060521 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6kf7j"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.065670 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.077944 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.081841 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.081932 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.081955 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64zf8\" (UniqueName: \"kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.081978 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.086435 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.089926 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.090361 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.092531 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.119746 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.184431 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64zf8\" (UniqueName: \"kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.184782 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.184929 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.184993 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.185454 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.212954 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.215319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.226377 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.228362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.232444 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.249140 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.249882 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64zf8\" (UniqueName: \"kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8\") pod \"watcher-kuttl-applier-0\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287668 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287699 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzqlf\" (UniqueName: \"kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287749 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287833 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.287868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.377213 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.389774 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.389852 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.389903 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.389957 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390022 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390060 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfdbd\" (UniqueName: \"kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390091 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390123 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390179 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390211 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390251 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzqlf\" (UniqueName: \"kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.390319 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.395825 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.398459 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.401721 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.404129 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.404455 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.405406 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.417107 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzqlf\" (UniqueName: \"kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf\") pod \"watcher-kuttl-api-0\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.460132 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.492155 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.492256 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfdbd\" (UniqueName: \"kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.492292 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.492323 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.492389 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.497509 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.498050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.502625 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.506794 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.519774 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfdbd\" (UniqueName: \"kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.617930 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:01:09 crc kubenswrapper[4720]: I0122 07:01:09.909142 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:09 crc kubenswrapper[4720]: W0122 07:01:09.916193 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1f4244a_771e_422c_823a_385d4c50bc05.slice/crio-7f5c8aa847640b0570d9b91429d3fc7cb5c8d6825deb06fe026e822bd904c7b7 WatchSource:0}: Error finding container 7f5c8aa847640b0570d9b91429d3fc7cb5c8d6825deb06fe026e822bd904c7b7: Status 404 returned error can't find the container with id 7f5c8aa847640b0570d9b91429d3fc7cb5c8d6825deb06fe026e822bd904c7b7
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.004605 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.137029 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.822690 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerStarted","Data":"46aaf4e4d4b0065c632b0f69d56ede8fc010aa80eea29954dc7648e37cd17766"}
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.823835 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5","Type":"ContainerStarted","Data":"2deadc8c176c6d684aeb0532bf24f59a053b34b679866f8473d3493c980a6ddd"}
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.825543 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b1f4244a-771e-422c-823a-385d4c50bc05","Type":"ContainerStarted","Data":"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"}
Jan 22 07:01:10 crc kubenswrapper[4720]: I0122 07:01:10.825610 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b1f4244a-771e-422c-823a-385d4c50bc05","Type":"ContainerStarted","Data":"7f5c8aa847640b0570d9b91429d3fc7cb5c8d6825deb06fe026e822bd904c7b7"}
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.836517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerStarted","Data":"b54b07fec61ed15a3e16ee62a54fe2c9cfe0d2ae0c39be951a71d9735a84a9d0"}
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.836934 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.836955 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerStarted","Data":"7b28e8cd3afd54a30e97ca37d482cc2c07ab84f8171505e65ddc7aaba1922c2c"}
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.839229 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5","Type":"ContainerStarted","Data":"54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9"}
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.868668 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.86863665 podStartE2EDuration="2.86863665s" podCreationTimestamp="2026-01-22 07:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:11.857238632 +0000 UTC m=+1563.999145347" watchObservedRunningTime="2026-01-22 07:01:11.86863665 +0000 UTC m=+1564.010543355"
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.883277 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.883248316 podStartE2EDuration="2.883248316s" podCreationTimestamp="2026-01-22 07:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:11.877564838 +0000 UTC m=+1564.019471563" watchObservedRunningTime="2026-01-22 07:01:11.883248316 +0000 UTC m=+1564.025155021"
Jan 22 07:01:11 crc kubenswrapper[4720]: I0122 07:01:11.906265 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.906235496 podStartE2EDuration="2.906235496s" podCreationTimestamp="2026-01-22 07:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:11.896779012 +0000 UTC m=+1564.038685737" watchObservedRunningTime="2026-01-22 07:01:11.906235496 +0000 UTC m=+1564.048142201"
Jan 22 07:01:12 crc kubenswrapper[4720]: I0122 07:01:12.503136 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wfms8"
Jan 22 07:01:12 crc kubenswrapper[4720]: I0122 07:01:12.503320 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:12 crc kubenswrapper[4720]: I0122 07:01:12.558768 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:12 crc kubenswrapper[4720]: I0122 07:01:12.921976 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:14 crc kubenswrapper[4720]: I0122 07:01:14.308217 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:14 crc kubenswrapper[4720]: I0122 07:01:14.377714 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:14 crc kubenswrapper[4720]: I0122 07:01:14.461278 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:16 crc kubenswrapper[4720]: I0122 07:01:16.126929 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:16 crc kubenswrapper[4720]: I0122 07:01:16.127565 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wfms8" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="registry-server" containerID="cri-o://bea1b396ffb7a2f8abd87f60dd17acc8a7e4483571e0b7154f0689282d9478f5" gracePeriod=2 Jan 22 07:01:17 crc kubenswrapper[4720]: I0122 07:01:17.733116 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.378063 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.406734 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.461108 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.473791 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.618949 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.646986 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.924766 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.936556 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.970463 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:19 crc kubenswrapper[4720]: I0122 07:01:19.971009 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:21 crc kubenswrapper[4720]: I0122 07:01:21.945270 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfms8_8bbf1f83-e710-4d55-9902-74eb351624bc/registry-server/0.log" Jan 22 07:01:21 crc kubenswrapper[4720]: I0122 07:01:21.947071 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerID="bea1b396ffb7a2f8abd87f60dd17acc8a7e4483571e0b7154f0689282d9478f5" exitCode=137 Jan 22 07:01:21 crc kubenswrapper[4720]: I0122 07:01:21.948075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerDied","Data":"bea1b396ffb7a2f8abd87f60dd17acc8a7e4483571e0b7154f0689282d9478f5"} Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.310403 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfms8_8bbf1f83-e710-4d55-9902-74eb351624bc/registry-server/0.log" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.311474 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.466854 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content\") pod \"8bbf1f83-e710-4d55-9902-74eb351624bc\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.467278 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrqv\" (UniqueName: \"kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv\") pod \"8bbf1f83-e710-4d55-9902-74eb351624bc\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.467407 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities\") pod \"8bbf1f83-e710-4d55-9902-74eb351624bc\" (UID: \"8bbf1f83-e710-4d55-9902-74eb351624bc\") " Jan 22 07:01:22 crc 
kubenswrapper[4720]: I0122 07:01:22.468500 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities" (OuterVolumeSpecName: "utilities") pod "8bbf1f83-e710-4d55-9902-74eb351624bc" (UID: "8bbf1f83-e710-4d55-9902-74eb351624bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.484280 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv" (OuterVolumeSpecName: "kube-api-access-pdrqv") pod "8bbf1f83-e710-4d55-9902-74eb351624bc" (UID: "8bbf1f83-e710-4d55-9902-74eb351624bc"). InnerVolumeSpecName "kube-api-access-pdrqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.530386 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bbf1f83-e710-4d55-9902-74eb351624bc" (UID: "8bbf1f83-e710-4d55-9902-74eb351624bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.569717 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdrqv\" (UniqueName: \"kubernetes.io/projected/8bbf1f83-e710-4d55-9902-74eb351624bc-kube-api-access-pdrqv\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.569754 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.569769 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bbf1f83-e710-4d55-9902-74eb351624bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.960277 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wfms8_8bbf1f83-e710-4d55-9902-74eb351624bc/registry-server/0.log" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.962099 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wfms8" event={"ID":"8bbf1f83-e710-4d55-9902-74eb351624bc","Type":"ContainerDied","Data":"a709d33a95e78051a10176d7383c7dc593faf73d444ac1a662f078f174e45df0"} Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.962177 4720 scope.go:117] "RemoveContainer" containerID="bea1b396ffb7a2f8abd87f60dd17acc8a7e4483571e0b7154f0689282d9478f5" Jan 22 07:01:22 crc kubenswrapper[4720]: I0122 07:01:22.962187 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wfms8" Jan 22 07:01:23 crc kubenswrapper[4720]: I0122 07:01:23.018180 4720 scope.go:117] "RemoveContainer" containerID="5827002a73daaea07bd47420f997a62c64f470eb146102fe6046411ba77a5c77" Jan 22 07:01:23 crc kubenswrapper[4720]: I0122 07:01:23.028411 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:23 crc kubenswrapper[4720]: I0122 07:01:23.044953 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wfms8"] Jan 22 07:01:23 crc kubenswrapper[4720]: I0122 07:01:23.068599 4720 scope.go:117] "RemoveContainer" containerID="fe5d57e73f151375c4890821fc5af1f8bff53e21409e07974c4fc352aae8ab14" Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.221765 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" path="/var/lib/kubelet/pods/8bbf1f83-e710-4d55-9902-74eb351624bc/volumes" Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.463262 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.463693 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-central-agent" containerID="cri-o://757303994a3020292d3ae5ec21cab4ba66ba73fd0ecbabf9e9a7cc6b3c47f8ad" gracePeriod=30 Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.463780 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-notification-agent" containerID="cri-o://05aa2dfae7f3f14ff9245077c19cfc721fcce175fbb298d1ad254fd44a8085a1" gracePeriod=30 Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.463883 
4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="proxy-httpd" containerID="cri-o://66e14d3c508834dcb4a06d4a6feaa2ee0b3399bb1f5a553c5723c0f5f75caabb" gracePeriod=30 Jan 22 07:01:24 crc kubenswrapper[4720]: I0122 07:01:24.465099 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="sg-core" containerID="cri-o://8426a058bfe5723aa93e1fba658e508219b244875be7e0a7dc492c4894145da4" gracePeriod=30 Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.001481 4720 generic.go:334] "Generic (PLEG): container finished" podID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerID="66e14d3c508834dcb4a06d4a6feaa2ee0b3399bb1f5a553c5723c0f5f75caabb" exitCode=0 Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.001738 4720 generic.go:334] "Generic (PLEG): container finished" podID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerID="8426a058bfe5723aa93e1fba658e508219b244875be7e0a7dc492c4894145da4" exitCode=2 Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.001796 4720 generic.go:334] "Generic (PLEG): container finished" podID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerID="757303994a3020292d3ae5ec21cab4ba66ba73fd0ecbabf9e9a7cc6b3c47f8ad" exitCode=0 Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.001903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerDied","Data":"66e14d3c508834dcb4a06d4a6feaa2ee0b3399bb1f5a553c5723c0f5f75caabb"} Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.002004 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerDied","Data":"8426a058bfe5723aa93e1fba658e508219b244875be7e0a7dc492c4894145da4"} Jan 22 07:01:25 crc kubenswrapper[4720]: I0122 07:01:25.002116 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerDied","Data":"757303994a3020292d3ae5ec21cab4ba66ba73fd0ecbabf9e9a7cc6b3c47f8ad"} Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.080987 4720 generic.go:334] "Generic (PLEG): container finished" podID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerID="05aa2dfae7f3f14ff9245077c19cfc721fcce175fbb298d1ad254fd44a8085a1" exitCode=0 Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.081022 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerDied","Data":"05aa2dfae7f3f14ff9245077c19cfc721fcce175fbb298d1ad254fd44a8085a1"} Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.498836 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.598741 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.598810 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.598948 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt9ft\" (UniqueName: \"kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.598993 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.599160 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.599186 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.599229 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.599266 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data\") pod \"c27b3f23-c680-4c3b-9986-f86e585bd220\" (UID: \"c27b3f23-c680-4c3b-9986-f86e585bd220\") " Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.599981 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.600333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.606431 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft" (OuterVolumeSpecName: "kube-api-access-qt9ft") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "kube-api-access-qt9ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.607149 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts" (OuterVolumeSpecName: "scripts") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.638742 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.657566 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.682300 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.698436 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data" (OuterVolumeSpecName: "config-data") pod "c27b3f23-c680-4c3b-9986-f86e585bd220" (UID: "c27b3f23-c680-4c3b-9986-f86e585bd220"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716395 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716440 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt9ft\" (UniqueName: \"kubernetes.io/projected/c27b3f23-c680-4c3b-9986-f86e585bd220-kube-api-access-qt9ft\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716457 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716473 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-run-httpd\") on node \"crc\" 
DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716485 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716495 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c27b3f23-c680-4c3b-9986-f86e585bd220-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716509 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:30 crc kubenswrapper[4720]: I0122 07:01:30.716520 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c27b3f23-c680-4c3b-9986-f86e585bd220-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.093125 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"c27b3f23-c680-4c3b-9986-f86e585bd220","Type":"ContainerDied","Data":"715bd0b0805943accca0a301efd63edd114c86648c37594377994f81c7bb7e54"} Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.094443 4720 scope.go:117] "RemoveContainer" containerID="66e14d3c508834dcb4a06d4a6feaa2ee0b3399bb1f5a553c5723c0f5f75caabb" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.093362 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.156543 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.179378 4720 scope.go:117] "RemoveContainer" containerID="8426a058bfe5723aa93e1fba658e508219b244875be7e0a7dc492c4894145da4" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.186329 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.356141 4720 scope.go:117] "RemoveContainer" containerID="05aa2dfae7f3f14ff9245077c19cfc721fcce175fbb298d1ad254fd44a8085a1" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.388878 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389369 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-central-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389393 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-central-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389417 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-notification-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389424 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-notification-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389436 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="registry-server" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 
07:01:31.389442 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="registry-server" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389458 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="extract-utilities" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389464 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="extract-utilities" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389479 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="extract-content" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389485 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="extract-content" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389495 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="sg-core" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389501 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="sg-core" Jan 22 07:01:31 crc kubenswrapper[4720]: E0122 07:01:31.389515 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="proxy-httpd" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389522 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="proxy-httpd" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389721 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bbf1f83-e710-4d55-9902-74eb351624bc" containerName="registry-server" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389733 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-central-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389745 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="ceilometer-notification-agent" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389758 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="sg-core" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.389774 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" containerName="proxy-httpd" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.391510 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.393506 4720 scope.go:117] "RemoveContainer" containerID="757303994a3020292d3ae5ec21cab4ba66ba73fd0ecbabf9e9a7cc6b3c47f8ad" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.396483 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.396682 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.396818 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.407401 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.542499 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7ck78\" (UniqueName: \"kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.542568 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.542810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.542877 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.543034 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.543205 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.543389 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.543489 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645259 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645344 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645414 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ck78\" (UniqueName: \"kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645438 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645469 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.645527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.646766 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.647820 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.651146 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.664817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.664895 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.665052 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.665626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.669397 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ck78\" (UniqueName: \"kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78\") pod \"ceilometer-0\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:31 crc kubenswrapper[4720]: I0122 07:01:31.715698 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:32 crc kubenswrapper[4720]: I0122 07:01:32.226063 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27b3f23-c680-4c3b-9986-f86e585bd220" path="/var/lib/kubelet/pods/c27b3f23-c680-4c3b-9986-f86e585bd220/volumes" Jan 22 07:01:32 crc kubenswrapper[4720]: I0122 07:01:32.228028 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:01:33 crc kubenswrapper[4720]: I0122 07:01:33.151829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerStarted","Data":"4f33ae768513bdbf51b80badd182e6e3b40dea74b927d30eddf391b6c049cdf6"} Jan 22 07:01:34 crc kubenswrapper[4720]: I0122 07:01:34.162764 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerStarted","Data":"ffcc4b4cb191fddf2160975d339b31c845a24c05c2f2a17a1de6f36b6aa03a6e"} Jan 22 07:01:35 crc kubenswrapper[4720]: I0122 07:01:35.179011 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerStarted","Data":"2ad21fae366c0f1a24b188588f88b7e594795af4b0d223d95813de0b419c5678"} Jan 22 07:01:37 crc kubenswrapper[4720]: I0122 07:01:37.200263 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerStarted","Data":"713c4effe31810a066a6c91cad09980b52d6f337165b56e1193f30a1f347dc33"} Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.574704 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.575521 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerName="memcached" containerID="cri-o://a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9" gracePeriod=30 Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.720354 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.720646 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" containerName="watcher-applier" containerID="cri-o://e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229" gracePeriod=30 Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.741096 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.741654 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" containerName="watcher-decision-engine" containerID="cri-o://54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" gracePeriod=30 Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.773274 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2wf2m"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.781135 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.781512 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-kuttl-api-log" containerID="cri-o://7b28e8cd3afd54a30e97ca37d482cc2c07ab84f8171505e65ddc7aaba1922c2c" gracePeriod=30 Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.782155 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-api" containerID="cri-o://b54b07fec61ed15a3e16ee62a54fe2c9cfe0d2ae0c39be951a71d9735a84a9d0" gracePeriod=30 Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.795638 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-2wf2m"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.844360 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-6qk6s"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.846229 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.849537 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.853733 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.863441 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-6qk6s"] Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897102 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897180 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897214 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897260 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897290 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897307 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.897330 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2bf\" (UniqueName: \"kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999029 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999106 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999177 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999218 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999239 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999262 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2bf\" (UniqueName: \"kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:38 crc kubenswrapper[4720]: I0122 07:01:38.999331 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" 
(UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.004473 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.004501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.005843 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.006656 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.006837 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.007201 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.018993 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2bf\" (UniqueName: \"kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf\") pod \"keystone-bootstrap-6qk6s\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.168903 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.265574 4720 generic.go:334] "Generic (PLEG): container finished" podID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerID="7b28e8cd3afd54a30e97ca37d482cc2c07ab84f8171505e65ddc7aaba1922c2c" exitCode=143 Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.265698 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerDied","Data":"7b28e8cd3afd54a30e97ca37d482cc2c07ab84f8171505e65ddc7aaba1922c2c"} Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.268489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerStarted","Data":"c0f5a9c8abfeb8bb5df2d859b8ae16cb107e69c31d0e54471c72b4aa8e84b551"} Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.269676 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.298141 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9804543620000001 podStartE2EDuration="8.298122116s" podCreationTimestamp="2026-01-22 07:01:31 +0000 UTC" firstStartedPulling="2026-01-22 07:01:32.223492153 +0000 UTC m=+1584.365398858" lastFinishedPulling="2026-01-22 07:01:38.541159907 +0000 UTC m=+1590.683066612" observedRunningTime="2026-01-22 07:01:39.296218293 +0000 UTC m=+1591.438125028" watchObservedRunningTime="2026-01-22 07:01:39.298122116 +0000 UTC m=+1591.440028821" Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.390821 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: 
, stderr: , exit code -1" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.395109 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.408156 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.408267 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" containerName="watcher-applier" Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.621217 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.627198 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.629424 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:01:39 crc kubenswrapper[4720]: E0122 07:01:39.629476 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" containerName="watcher-decision-engine" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.747478 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-6qk6s"] Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.912264 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.166:9322/\": read tcp 10.217.0.2:35498->10.217.0.166:9322: read: connection reset by peer" Jan 22 07:01:39 crc kubenswrapper[4720]: I0122 07:01:39.912318 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9322/\": read tcp 10.217.0.2:35506->10.217.0.166:9322: read: connection reset by peer" Jan 22 07:01:40 crc kubenswrapper[4720]: I0122 07:01:40.222691 4720 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9501447-d695-42bc-ab22-0422b2db3647" path="/var/lib/kubelet/pods/b9501447-d695-42bc-ab22-0422b2db3647/volumes" Jan 22 07:01:40 crc kubenswrapper[4720]: I0122 07:01:40.280564 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" event={"ID":"439270d4-5c94-4dba-8623-2d03bd7198d8","Type":"ContainerStarted","Data":"73e23a70af550acd5cfea602c080620f235562914d5d78f8d3841b460ec57067"} Jan 22 07:01:40 crc kubenswrapper[4720]: I0122 07:01:40.624123 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/memcached-0" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.107:11211: connect: connection refused" Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.234781 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.316138 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" event={"ID":"439270d4-5c94-4dba-8623-2d03bd7198d8","Type":"ContainerStarted","Data":"99971402e2f98e7ec904431a823b04bf2b72c067ff53e86c7576d3fc53e0fe04"} Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.337517 4720 generic.go:334] "Generic (PLEG): container finished" podID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerID="b54b07fec61ed15a3e16ee62a54fe2c9cfe0d2ae0c39be951a71d9735a84a9d0" exitCode=0 Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.337665 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerDied","Data":"b54b07fec61ed15a3e16ee62a54fe2c9cfe0d2ae0c39be951a71d9735a84a9d0"} Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.337707 4720 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b7fd8a18-2d71-474c-83e4-b7789274ac42","Type":"ContainerDied","Data":"46aaf4e4d4b0065c632b0f69d56ede8fc010aa80eea29954dc7648e37cd17766"}
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.337728 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46aaf4e4d4b0065c632b0f69d56ede8fc010aa80eea29954dc7648e37cd17766"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.350427 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config\") pod \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.350514 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs\") pod \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.350570 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data\") pod \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.350615 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle\") pod \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.350930 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4h4k\" (UniqueName: \"kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k\") pod \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\" (UID: \"0f11b752-39dd-4f60-b6e5-6f788a85f86a\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.351382 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "0f11b752-39dd-4f60-b6e5-6f788a85f86a" (UID: "0f11b752-39dd-4f60-b6e5-6f788a85f86a"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.351791 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data" (OuterVolumeSpecName: "config-data") pod "0f11b752-39dd-4f60-b6e5-6f788a85f86a" (UID: "0f11b752-39dd-4f60-b6e5-6f788a85f86a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.351859 4720 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kolla-config\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.354523 4720 generic.go:334] "Generic (PLEG): container finished" podID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerID="a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9" exitCode=0
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.355726 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.355877 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0f11b752-39dd-4f60-b6e5-6f788a85f86a","Type":"ContainerDied","Data":"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"}
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.355923 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"0f11b752-39dd-4f60-b6e5-6f788a85f86a","Type":"ContainerDied","Data":"ccae151106b74c918b750440a199253eb3588be6e722310b96d8d0e7410450ba"}
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.355950 4720 scope.go:117] "RemoveContainer" containerID="a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.365263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" podStartSLOduration=3.365237162 podStartE2EDuration="3.365237162s" podCreationTimestamp="2026-01-22 07:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:41.362351282 +0000 UTC m=+1593.504258017" watchObservedRunningTime="2026-01-22 07:01:41.365237162 +0000 UTC m=+1593.507143877"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.365988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k" (OuterVolumeSpecName: "kube-api-access-c4h4k") pod "0f11b752-39dd-4f60-b6e5-6f788a85f86a" (UID: "0f11b752-39dd-4f60-b6e5-6f788a85f86a"). InnerVolumeSpecName "kube-api-access-c4h4k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.406232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f11b752-39dd-4f60-b6e5-6f788a85f86a" (UID: "0f11b752-39dd-4f60-b6e5-6f788a85f86a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.415050 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "0f11b752-39dd-4f60-b6e5-6f788a85f86a" (UID: "0f11b752-39dd-4f60-b6e5-6f788a85f86a"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.442102 4720 scope.go:117] "RemoveContainer" containerID="a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"
Jan 22 07:01:41 crc kubenswrapper[4720]: E0122 07:01:41.446353 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9\": container with ID starting with a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9 not found: ID does not exist" containerID="a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.446413 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9"} err="failed to get container status \"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9\": rpc error: code = NotFound desc = could not find container \"a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9\": container with ID starting with a6a06afc3c96b7c1b7603a84bc896d35ff9f51ef9b4f34e596bcf3ece6ddd1b9 not found: ID does not exist"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.454752 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.457082 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0f11b752-39dd-4f60-b6e5-6f788a85f86a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.457165 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.457178 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4h4k\" (UniqueName: \"kubernetes.io/projected/0f11b752-39dd-4f60-b6e5-6f788a85f86a-kube-api-access-c4h4k\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.457188 4720 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f11b752-39dd-4f60-b6e5-6f788a85f86a-memcached-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558649 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558823 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzqlf\" (UniqueName: \"kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558886 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558938 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.558972 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.559042 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs\") pod \"b7fd8a18-2d71-474c-83e4-b7789274ac42\" (UID: \"b7fd8a18-2d71-474c-83e4-b7789274ac42\") "
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.559585 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs" (OuterVolumeSpecName: "logs") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.560197 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7fd8a18-2d71-474c-83e4-b7789274ac42-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.568024 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf" (OuterVolumeSpecName: "kube-api-access-bzqlf") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "kube-api-access-bzqlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.591013 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.609977 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.635900 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.639475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data" (OuterVolumeSpecName: "config-data") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.640500 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b7fd8a18-2d71-474c-83e4-b7789274ac42" (UID: "b7fd8a18-2d71-474c-83e4-b7789274ac42"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662074 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzqlf\" (UniqueName: \"kubernetes.io/projected/b7fd8a18-2d71-474c-83e4-b7789274ac42-kube-api-access-bzqlf\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662124 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662155 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662172 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662188 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.662199 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7fd8a18-2d71-474c-83e4-b7789274ac42-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.698061 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.712092 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.722536 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 07:01:41 crc kubenswrapper[4720]: E0122 07:01:41.723021 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-kuttl-api-log"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723047 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-kuttl-api-log"
Jan 22 07:01:41 crc kubenswrapper[4720]: E0122 07:01:41.723068 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-api"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723077 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-api"
Jan 22 07:01:41 crc kubenswrapper[4720]: E0122 07:01:41.723109 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerName="memcached"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723118 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerName="memcached"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723327 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" containerName="memcached"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723345 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-api"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.723358 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" containerName="watcher-kuttl-api-log"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.724050 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.726516 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-gtj4j"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.726791 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.735577 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.736209 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.867357 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.869456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-kolla-config\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.869521 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdv4\" (UniqueName: \"kubernetes.io/projected/db50c3b8-8300-4689-be75-dbcc3b10a27f-kube-api-access-4sdv4\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.869546 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-config-data\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.869577 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.971215 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.971428 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-kolla-config\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.971516 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sdv4\" (UniqueName: \"kubernetes.io/projected/db50c3b8-8300-4689-be75-dbcc3b10a27f-kube-api-access-4sdv4\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.971548 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-config-data\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.971606 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.972562 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-kolla-config\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.972674 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db50c3b8-8300-4689-be75-dbcc3b10a27f-config-data\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.975863 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.976568 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db50c3b8-8300-4689-be75-dbcc3b10a27f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:41 crc kubenswrapper[4720]: I0122 07:01:41.988659 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sdv4\" (UniqueName: \"kubernetes.io/projected/db50c3b8-8300-4689-be75-dbcc3b10a27f-kube-api-access-4sdv4\") pod \"memcached-0\" (UID: \"db50c3b8-8300-4689-be75-dbcc3b10a27f\") " pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.043664 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.103090 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.224258 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f11b752-39dd-4f60-b6e5-6f788a85f86a" path="/var/lib/kubelet/pods/0f11b752-39dd-4f60-b6e5-6f788a85f86a/volumes"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.275944 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64zf8\" (UniqueName: \"kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8\") pod \"b1f4244a-771e-422c-823a-385d4c50bc05\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") "
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.276054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data\") pod \"b1f4244a-771e-422c-823a-385d4c50bc05\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") "
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.276233 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs\") pod \"b1f4244a-771e-422c-823a-385d4c50bc05\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") "
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.276299 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle\") pod \"b1f4244a-771e-422c-823a-385d4c50bc05\" (UID: \"b1f4244a-771e-422c-823a-385d4c50bc05\") "
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.277041 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs" (OuterVolumeSpecName: "logs") pod "b1f4244a-771e-422c-823a-385d4c50bc05" (UID: "b1f4244a-771e-422c-823a-385d4c50bc05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.282293 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8" (OuterVolumeSpecName: "kube-api-access-64zf8") pod "b1f4244a-771e-422c-823a-385d4c50bc05" (UID: "b1f4244a-771e-422c-823a-385d4c50bc05"). InnerVolumeSpecName "kube-api-access-64zf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.309833 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1f4244a-771e-422c-823a-385d4c50bc05" (UID: "b1f4244a-771e-422c-823a-385d4c50bc05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.367179 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data" (OuterVolumeSpecName: "config-data") pod "b1f4244a-771e-422c-823a-385d4c50bc05" (UID: "b1f4244a-771e-422c-823a-385d4c50bc05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.378353 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.378413 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64zf8\" (UniqueName: \"kubernetes.io/projected/b1f4244a-771e-422c-823a-385d4c50bc05-kube-api-access-64zf8\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.378434 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1f4244a-771e-422c-823a-385d4c50bc05-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.378444 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1f4244a-771e-422c-823a-385d4c50bc05-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.379087 4720 generic.go:334] "Generic (PLEG): container finished" podID="b1f4244a-771e-422c-823a-385d4c50bc05" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229" exitCode=0
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.379154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b1f4244a-771e-422c-823a-385d4c50bc05","Type":"ContainerDied","Data":"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"}
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.379186 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b1f4244a-771e-422c-823a-385d4c50bc05","Type":"ContainerDied","Data":"7f5c8aa847640b0570d9b91429d3fc7cb5c8d6825deb06fe026e822bd904c7b7"}
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.379207 4720 scope.go:117] "RemoveContainer" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.379289 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.383734 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.440000 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.447430 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.461145 4720 scope.go:117] "RemoveContainer" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"
Jan 22 07:01:42 crc kubenswrapper[4720]: E0122 07:01:42.466837 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229\": container with ID starting with e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229 not found: ID does not exist" containerID="e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.466896 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229"} err="failed to get container status \"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229\": rpc error: code = NotFound desc = could not find container \"e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229\": container with ID starting with e24266a3e9b29047ddb048ce6185bdd1d1d92dde91c780a7f5b7e17e97177229 not found: ID does not exist"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.529443 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.538880 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: E0122 07:01:42.539386 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" containerName="watcher-applier"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.539406 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" containerName="watcher-applier"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.539641 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" containerName="watcher-applier"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.540335 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.543936 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.550414 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.554552 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.575222 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.579489 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.588883 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.589843 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.590650 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.595578 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.633801 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"]
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.696453 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.696510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.696541 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.696560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697027 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697375 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-x4cpw\" (UniqueName: \"kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697427 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697456 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697492 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697524 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697545 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.697626 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52xb\" (UniqueName: \"kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800628 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800663 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800683 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800732 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4cpw\" (UniqueName: \"kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800787 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800810 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800839 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.800888 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.801018 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.801059 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j52xb\" (UniqueName: 
\"kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.803926 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.806721 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.807417 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.811357 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.812501 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.812959 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.814122 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.814720 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.815276 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.816473 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.823861 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.823874 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4cpw\" (UniqueName: \"kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw\") pod \"watcher-kuttl-applier-0\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.826687 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j52xb\" (UniqueName: \"kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb\") pod \"watcher-kuttl-api-0\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.864489 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:42 crc kubenswrapper[4720]: I0122 07:01:42.901451 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:43 crc kubenswrapper[4720]: I0122 07:01:43.362571 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:01:43 crc kubenswrapper[4720]: W0122 07:01:43.368451 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf29a126f_9c3b_4569_bfe2_64a37f315aa8.slice/crio-1556095f37d8b084f2454409de12a0181cebfec940584a444e92696e269b72ce WatchSource:0}: Error finding container 1556095f37d8b084f2454409de12a0181cebfec940584a444e92696e269b72ce: Status 404 returned error can't find the container with id 1556095f37d8b084f2454409de12a0181cebfec940584a444e92696e269b72ce Jan 22 07:01:43 crc kubenswrapper[4720]: I0122 07:01:43.412238 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"db50c3b8-8300-4689-be75-dbcc3b10a27f","Type":"ContainerStarted","Data":"d60eeb7ec6a4f45d04ef15913be3dfe8421a93cb84d415bd625b83b5f87a0328"} Jan 22 07:01:43 crc kubenswrapper[4720]: I0122 07:01:43.412296 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"db50c3b8-8300-4689-be75-dbcc3b10a27f","Type":"ContainerStarted","Data":"ae03823b41279556022690f9dd6ca6173105352a3d45d090f9d6c229b34edb73"} Jan 22 07:01:43 crc kubenswrapper[4720]: I0122 07:01:43.426496 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f29a126f-9c3b-4569-bfe2-64a37f315aa8","Type":"ContainerStarted","Data":"1556095f37d8b084f2454409de12a0181cebfec940584a444e92696e269b72ce"} Jan 22 07:01:43 crc kubenswrapper[4720]: I0122 07:01:43.463417 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.224119 4720 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b1f4244a-771e-422c-823a-385d4c50bc05" path="/var/lib/kubelet/pods/b1f4244a-771e-422c-823a-385d4c50bc05/volumes" Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.225487 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7fd8a18-2d71-474c-83e4-b7789274ac42" path="/var/lib/kubelet/pods/b7fd8a18-2d71-474c-83e4-b7789274ac42/volumes" Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.436486 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f29a126f-9c3b-4569-bfe2-64a37f315aa8","Type":"ContainerStarted","Data":"4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9"} Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.439060 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerStarted","Data":"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd"} Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.439097 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.439112 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerStarted","Data":"f2174c7079bd18034cc53912c8d87095cb86ece2c3e2165ce2fbf54623ee0dff"} Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.463081 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.463056982 podStartE2EDuration="2.463056982s" podCreationTimestamp="2026-01-22 07:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:44.461363785 +0000 UTC 
m=+1596.603270510" watchObservedRunningTime="2026-01-22 07:01:44.463056982 +0000 UTC m=+1596.604963687" Jan 22 07:01:44 crc kubenswrapper[4720]: I0122 07:01:44.502302 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=3.502282003 podStartE2EDuration="3.502282003s" podCreationTimestamp="2026-01-22 07:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:44.480542328 +0000 UTC m=+1596.622449043" watchObservedRunningTime="2026-01-22 07:01:44.502282003 +0000 UTC m=+1596.644188708" Jan 22 07:01:45 crc kubenswrapper[4720]: I0122 07:01:45.449432 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerStarted","Data":"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38"} Jan 22 07:01:45 crc kubenswrapper[4720]: I0122 07:01:45.471630 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=3.471601089 podStartE2EDuration="3.471601089s" podCreationTimestamp="2026-01-22 07:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:45.466592219 +0000 UTC m=+1597.608498944" watchObservedRunningTime="2026-01-22 07:01:45.471601089 +0000 UTC m=+1597.613507804" Jan 22 07:01:46 crc kubenswrapper[4720]: I0122 07:01:46.458200 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.532942 4720 generic.go:334] "Generic (PLEG): container finished" podID="439270d4-5c94-4dba-8623-2d03bd7198d8" containerID="99971402e2f98e7ec904431a823b04bf2b72c067ff53e86c7576d3fc53e0fe04" exitCode=0 Jan 
22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.533124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" event={"ID":"439270d4-5c94-4dba-8623-2d03bd7198d8","Type":"ContainerDied","Data":"99971402e2f98e7ec904431a823b04bf2b72c067ff53e86c7576d3fc53e0fe04"} Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.557347 4720 generic.go:334] "Generic (PLEG): container finished" podID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" containerID="54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" exitCode=0 Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.557504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5","Type":"ContainerDied","Data":"54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9"} Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.727324 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.809113 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca\") pod \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.809163 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfdbd\" (UniqueName: \"kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd\") pod \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.809206 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle\") pod \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.809247 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs\") pod \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.809270 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data\") pod \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\" (UID: \"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5\") " Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.811860 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs" (OuterVolumeSpecName: "logs") pod "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" (UID: "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.816732 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd" (OuterVolumeSpecName: "kube-api-access-mfdbd") pod "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" (UID: "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5"). InnerVolumeSpecName "kube-api-access-mfdbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.863986 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" (UID: "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.866079 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.872656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" (UID: "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.879457 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data" (OuterVolumeSpecName: "config-data") pod "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" (UID: "bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.901855 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.911161 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.911441 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.911453 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.911463 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfdbd\" (UniqueName: \"kubernetes.io/projected/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-kube-api-access-mfdbd\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:47 crc kubenswrapper[4720]: I0122 07:01:47.911472 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.570685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5","Type":"ContainerDied","Data":"2deadc8c176c6d684aeb0532bf24f59a053b34b679866f8473d3493c980a6ddd"} Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.570860 4720 scope.go:117] "RemoveContainer" containerID="54f351b36a550966aaa4f2d9fc1d0e810b27b0908f58fc1fdee03390ef15d8c9" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.570881 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.570797 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.597375 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.609358 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.636374 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:48 crc kubenswrapper[4720]: E0122 07:01:48.636850 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" containerName="watcher-decision-engine" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.636870 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" containerName="watcher-decision-engine" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.637133 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" 
containerName="watcher-decision-engine" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.638000 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.656194 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.660275 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.741225 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.741295 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9f44\" (UniqueName: \"kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.741788 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.741893 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.741975 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.742109 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844506 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844615 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844667 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.844776 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9f44\" (UniqueName: \"kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.851366 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.853560 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.856838 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.862573 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.866659 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.867709 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9f44\" (UniqueName: \"kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"c8ac272a-b713-4024-a0e0-fd1873016edc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:48 crc kubenswrapper[4720]: I0122 07:01:48.966799 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.056481 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.161510 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.162472 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.162959 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2bf\" (UniqueName: \"kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.162990 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.163029 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.163105 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.163145 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data\") pod \"439270d4-5c94-4dba-8623-2d03bd7198d8\" (UID: \"439270d4-5c94-4dba-8623-2d03bd7198d8\") " Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.171582 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.171620 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts" (OuterVolumeSpecName: "scripts") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.172111 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.172365 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf" (OuterVolumeSpecName: "kube-api-access-rj2bf") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "kube-api-access-rj2bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.195103 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data" (OuterVolumeSpecName: "config-data") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.196794 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.254664 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265700 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2bf\" (UniqueName: \"kubernetes.io/projected/439270d4-5c94-4dba-8623-2d03bd7198d8-kube-api-access-rj2bf\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265739 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265753 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265764 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265777 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.265793 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.283676 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "439270d4-5c94-4dba-8623-2d03bd7198d8" (UID: "439270d4-5c94-4dba-8623-2d03bd7198d8"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.367639 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/439270d4-5c94-4dba-8623-2d03bd7198d8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.508340 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.598120 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c8ac272a-b713-4024-a0e0-fd1873016edc","Type":"ContainerStarted","Data":"750a2060fdd3ee5269a6a5717685012607a85a2645aa3cd5e3ad4b09b466c1c8"} Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.613304 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.614173 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-6qk6s" event={"ID":"439270d4-5c94-4dba-8623-2d03bd7198d8","Type":"ContainerDied","Data":"73e23a70af550acd5cfea602c080620f235562914d5d78f8d3841b460ec57067"} Jan 22 07:01:49 crc kubenswrapper[4720]: I0122 07:01:49.614275 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73e23a70af550acd5cfea602c080620f235562914d5d78f8d3841b460ec57067" Jan 22 07:01:50 crc kubenswrapper[4720]: I0122 07:01:50.227591 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5" path="/var/lib/kubelet/pods/bf960b01-5c0d-4a0d-aae9-ffaeff4af5d5/volumes" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.046002 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.189529 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-b68754746-52s4w"] Jan 22 07:01:52 crc kubenswrapper[4720]: E0122 07:01:52.190076 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="439270d4-5c94-4dba-8623-2d03bd7198d8" containerName="keystone-bootstrap" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.190101 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="439270d4-5c94-4dba-8623-2d03bd7198d8" containerName="keystone-bootstrap" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.190348 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="439270d4-5c94-4dba-8623-2d03bd7198d8" containerName="keystone-bootstrap" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.191181 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.207221 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-b68754746-52s4w"] Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221231 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-scripts\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-internal-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221314 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-public-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221336 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-config-data\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221363 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-credential-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lzxl\" (UniqueName: \"kubernetes.io/projected/415f8b45-c7ea-49bc-aed1-1367c47fac0b-kube-api-access-4lzxl\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221420 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-fernet-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221444 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-combined-ca-bundle\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.221458 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-cert-memcached-mtls\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 
crc kubenswrapper[4720]: I0122 07:01:52.323368 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-credential-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323500 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lzxl\" (UniqueName: \"kubernetes.io/projected/415f8b45-c7ea-49bc-aed1-1367c47fac0b-kube-api-access-4lzxl\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323533 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-fernet-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-combined-ca-bundle\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323590 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-cert-memcached-mtls\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc 
kubenswrapper[4720]: I0122 07:01:52.323695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-scripts\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323780 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-internal-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323839 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-public-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.323877 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-config-data\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.328587 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-scripts\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.328709 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-credential-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.329420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-fernet-keys\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.329448 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-combined-ca-bundle\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.329819 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-public-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.333384 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-cert-memcached-mtls\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.333706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-config-data\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.333850 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/415f8b45-c7ea-49bc-aed1-1367c47fac0b-internal-tls-certs\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.346886 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lzxl\" (UniqueName: \"kubernetes.io/projected/415f8b45-c7ea-49bc-aed1-1367c47fac0b-kube-api-access-4lzxl\") pod \"keystone-b68754746-52s4w\" (UID: \"415f8b45-c7ea-49bc-aed1-1367c47fac0b\") " pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.517801 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.865716 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.900405 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.901636 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.917779 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:52 crc kubenswrapper[4720]: I0122 07:01:52.987681 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-b68754746-52s4w"] Jan 22 07:01:52 crc kubenswrapper[4720]: W0122 07:01:52.989755 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415f8b45_c7ea_49bc_aed1_1367c47fac0b.slice/crio-e1dfb81c9c3e2cc07825beb8b4525725a7295e893e22634e5f0c496cbfedd121 WatchSource:0}: Error finding container e1dfb81c9c3e2cc07825beb8b4525725a7295e893e22634e5f0c496cbfedd121: Status 404 returned error can't find the container with id e1dfb81c9c3e2cc07825beb8b4525725a7295e893e22634e5f0c496cbfedd121 Jan 22 07:01:53 crc kubenswrapper[4720]: I0122 07:01:53.674619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-b68754746-52s4w" event={"ID":"415f8b45-c7ea-49bc-aed1-1367c47fac0b","Type":"ContainerStarted","Data":"e1dfb81c9c3e2cc07825beb8b4525725a7295e893e22634e5f0c496cbfedd121"} Jan 22 07:01:53 crc kubenswrapper[4720]: I0122 07:01:53.676291 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c8ac272a-b713-4024-a0e0-fd1873016edc","Type":"ContainerStarted","Data":"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14"} Jan 22 07:01:53 crc kubenswrapper[4720]: I0122 07:01:53.686168 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:53 crc kubenswrapper[4720]: I0122 07:01:53.700799 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:01:54 crc kubenswrapper[4720]: I0122 07:01:54.689294 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-b68754746-52s4w" event={"ID":"415f8b45-c7ea-49bc-aed1-1367c47fac0b","Type":"ContainerStarted","Data":"2e14911c1bdcb1a875d7d2e002ce00fbbaefa9315dddbe61160d89dea6c2fdca"} Jan 22 07:01:54 crc kubenswrapper[4720]: I0122 07:01:54.690027 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:01:54 crc kubenswrapper[4720]: I0122 07:01:54.729558 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-b68754746-52s4w" podStartSLOduration=2.729533961 podStartE2EDuration="2.729533961s" podCreationTimestamp="2026-01-22 07:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:54.720797638 +0000 UTC m=+1606.862704343" watchObservedRunningTime="2026-01-22 07:01:54.729533961 +0000 UTC m=+1606.871440666" Jan 22 07:01:54 crc kubenswrapper[4720]: I0122 07:01:54.758735 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=6.758711213 podStartE2EDuration="6.758711213s" podCreationTimestamp="2026-01-22 07:01:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:01:54.752760787 +0000 UTC m=+1606.894667522" watchObservedRunningTime="2026-01-22 07:01:54.758711213 +0000 UTC m=+1606.900617918" Jan 22 07:01:57 crc kubenswrapper[4720]: I0122 07:01:57.840790 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:57 crc kubenswrapper[4720]: I0122 07:01:57.841390 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-kuttl-api-log" containerID="cri-o://0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd" gracePeriod=30 Jan 22 07:01:57 crc kubenswrapper[4720]: I0122 07:01:57.841481 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-api" containerID="cri-o://1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38" gracePeriod=30 Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 07:01:58.709320 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": read tcp 10.217.0.2:40104->10.217.0.172:9322: read: connection reset by peer" Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 07:01:58.709298 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9322/\": read tcp 10.217.0.2:40088->10.217.0.172:9322: read: connection reset by peer" Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 
07:01:58.724372 4720 generic.go:334] "Generic (PLEG): container finished" podID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerID="0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd" exitCode=143 Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 07:01:58.724441 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerDied","Data":"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd"} Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 07:01:58.967546 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:58 crc kubenswrapper[4720]: I0122 07:01:58.997490 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.252271 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.331844 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332052 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332088 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332129 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j52xb\" (UniqueName: \"kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332201 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332254 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332295 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332371 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data\") pod \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\" (UID: \"1f80ec8a-6cdd-46dc-8e7c-7357052e1472\") " Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.332542 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs" (OuterVolumeSpecName: "logs") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.333079 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.346246 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb" (OuterVolumeSpecName: "kube-api-access-j52xb") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "kube-api-access-j52xb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.375034 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.405167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.405229 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data" (OuterVolumeSpecName: "config-data") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.412693 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.415987 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435111 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435144 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j52xb\" (UniqueName: \"kubernetes.io/projected/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-kube-api-access-j52xb\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435159 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435168 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435175 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435186 4720 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.435265 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1f80ec8a-6cdd-46dc-8e7c-7357052e1472" (UID: "1f80ec8a-6cdd-46dc-8e7c-7357052e1472"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.537624 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1f80ec8a-6cdd-46dc-8e7c-7357052e1472-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.734502 4720 generic.go:334] "Generic (PLEG): container finished" podID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerID="1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38" exitCode=0 Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.734593 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerDied","Data":"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38"} Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.734658 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"1f80ec8a-6cdd-46dc-8e7c-7357052e1472","Type":"ContainerDied","Data":"f2174c7079bd18034cc53912c8d87095cb86ece2c3e2165ce2fbf54623ee0dff"} Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.734665 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.734683 4720 scope.go:117] "RemoveContainer" containerID="1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.735122 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.758317 4720 scope.go:117] "RemoveContainer" containerID="0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.775736 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.789332 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.796442 4720 scope.go:117] "RemoveContainer" containerID="1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38" Jan 22 07:01:59 crc kubenswrapper[4720]: E0122 07:01:59.797453 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38\": container with ID starting with 1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38 not found: ID does not exist" containerID="1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.797512 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38"} err="failed to get container status \"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38\": rpc error: code = NotFound desc 
= could not find container \"1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38\": container with ID starting with 1299b50e714b029a3186ec89c02bb0736b881576f76d5aa8e3ba6a0d75cf4a38 not found: ID does not exist" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.797566 4720 scope.go:117] "RemoveContainer" containerID="0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd" Jan 22 07:01:59 crc kubenswrapper[4720]: E0122 07:01:59.797997 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd\": container with ID starting with 0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd not found: ID does not exist" containerID="0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.798022 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd"} err="failed to get container status \"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd\": rpc error: code = NotFound desc = could not find container \"0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd\": container with ID starting with 0b9199ca3d21fddfab369a5c331549a1a8e3f2ce7e3c751d233d1a938282c4dd not found: ID does not exist" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.805654 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.809029 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:59 crc kubenswrapper[4720]: E0122 07:01:59.809474 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" 
containerName="watcher-kuttl-api-log" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.809496 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-kuttl-api-log" Jan 22 07:01:59 crc kubenswrapper[4720]: E0122 07:01:59.809530 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-api" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.809818 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-api" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.810063 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-api" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.810088 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" containerName="watcher-kuttl-api-log" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.828764 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.828923 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.833725 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950635 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950723 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950887 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-lgcwj\" (UniqueName: \"kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:01:59 crc kubenswrapper[4720]: I0122 07:01:59.950957 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052528 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052586 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052627 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgcwj\" (UniqueName: \"kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.052697 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.053354 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.058621 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.058750 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data\") pod \"watcher-kuttl-api-0\" 
(UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.058898 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.058954 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.069953 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgcwj\" (UniqueName: \"kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj\") pod \"watcher-kuttl-api-0\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.168435 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.223322 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f80ec8a-6cdd-46dc-8e7c-7357052e1472" path="/var/lib/kubelet/pods/1f80ec8a-6cdd-46dc-8e7c-7357052e1472/volumes" Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.630051 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:02:00 crc kubenswrapper[4720]: W0122 07:02:00.634470 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323e3085_cab5_4d90_accf_4586756bd395.slice/crio-1bf732d8bf11501a47deca330e06122beb19514da210024a9cccb6c91d60fac7 WatchSource:0}: Error finding container 1bf732d8bf11501a47deca330e06122beb19514da210024a9cccb6c91d60fac7: Status 404 returned error can't find the container with id 1bf732d8bf11501a47deca330e06122beb19514da210024a9cccb6c91d60fac7 Jan 22 07:02:00 crc kubenswrapper[4720]: I0122 07:02:00.746273 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerStarted","Data":"1bf732d8bf11501a47deca330e06122beb19514da210024a9cccb6c91d60fac7"} Jan 22 07:02:01 crc kubenswrapper[4720]: I0122 07:02:01.767624 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:01 crc kubenswrapper[4720]: I0122 07:02:01.771995 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerStarted","Data":"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec"} Jan 22 07:02:01 crc kubenswrapper[4720]: I0122 07:02:01.772086 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerStarted","Data":"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71"} Jan 22 07:02:01 crc kubenswrapper[4720]: I0122 07:02:01.772120 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:01 crc kubenswrapper[4720]: I0122 07:02:01.905063 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.90504325 podStartE2EDuration="2.90504325s" podCreationTimestamp="2026-01-22 07:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:02:01.902317235 +0000 UTC m=+1614.044223940" watchObservedRunningTime="2026-01-22 07:02:01.90504325 +0000 UTC m=+1614.046949955" Jan 22 07:02:04 crc kubenswrapper[4720]: I0122 07:02:04.456841 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:05 crc kubenswrapper[4720]: I0122 07:02:05.168631 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:10 crc kubenswrapper[4720]: I0122 07:02:10.169124 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:10 crc kubenswrapper[4720]: I0122 07:02:10.176710 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:10 crc kubenswrapper[4720]: I0122 07:02:10.852622 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:02:24 crc kubenswrapper[4720]: I0122 07:02:24.517027 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="watcher-kuttl-default/keystone-b68754746-52s4w" Jan 22 07:02:24 crc kubenswrapper[4720]: I0122 07:02:24.607385 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 07:02:24 crc kubenswrapper[4720]: I0122 07:02:24.607680 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" podUID="94acf8e7-279f-4560-9716-56f731501d94" containerName="keystone-api" containerID="cri-o://19f76acff38894c114f2443a20a59c6e2d8b7aa672fcd330010cc3b567d81d35" gracePeriod=30 Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.016770 4720 generic.go:334] "Generic (PLEG): container finished" podID="94acf8e7-279f-4560-9716-56f731501d94" containerID="19f76acff38894c114f2443a20a59c6e2d8b7aa672fcd330010cc3b567d81d35" exitCode=0 Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.016875 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" event={"ID":"94acf8e7-279f-4560-9716-56f731501d94","Type":"ContainerDied","Data":"19f76acff38894c114f2443a20a59c6e2d8b7aa672fcd330010cc3b567d81d35"} Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.323141 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430240 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430707 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430743 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430819 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430859 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430893 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kknzf\" 
(UniqueName: \"kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.430963 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.431003 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs\") pod \"94acf8e7-279f-4560-9716-56f731501d94\" (UID: \"94acf8e7-279f-4560-9716-56f731501d94\") " Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.436782 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.436813 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.438521 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts" (OuterVolumeSpecName: "scripts") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.442387 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf" (OuterVolumeSpecName: "kube-api-access-kknzf") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "kube-api-access-kknzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.464268 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.466024 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data" (OuterVolumeSpecName: "config-data") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.478563 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.490337 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "94acf8e7-279f-4560-9716-56f731501d94" (UID: "94acf8e7-279f-4560-9716-56f731501d94"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533397 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533432 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533448 4720 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533457 4720 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 22 
07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533470 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kknzf\" (UniqueName: \"kubernetes.io/projected/94acf8e7-279f-4560-9716-56f731501d94-kube-api-access-kknzf\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533485 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533495 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:28 crc kubenswrapper[4720]: I0122 07:02:28.533504 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94acf8e7-279f-4560-9716-56f731501d94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.028175 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" event={"ID":"94acf8e7-279f-4560-9716-56f731501d94","Type":"ContainerDied","Data":"9bb688468587c4651c3504c958b57edd707756a40a5140b3b98afbdfe7bd6160"} Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.028249 4720 scope.go:117] "RemoveContainer" containerID="19f76acff38894c114f2443a20a59c6e2d8b7aa672fcd330010cc3b567d81d35" Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.028243 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-fb4ff76bc-49d2q" Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.071050 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.081070 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-fb4ff76bc-49d2q"] Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.780793 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:02:29 crc kubenswrapper[4720]: I0122 07:02:29.781178 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 07:02:30.226531 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94acf8e7-279f-4560-9716-56f731501d94" path="/var/lib/kubelet/pods/94acf8e7-279f-4560-9716-56f731501d94/volumes" Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 07:02:30.777243 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 07:02:30.777764 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-central-agent" containerID="cri-o://ffcc4b4cb191fddf2160975d339b31c845a24c05c2f2a17a1de6f36b6aa03a6e" gracePeriod=30 Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 
07:02:30.777836 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-notification-agent" containerID="cri-o://2ad21fae366c0f1a24b188588f88b7e594795af4b0d223d95813de0b419c5678" gracePeriod=30 Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 07:02:30.777838 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="sg-core" containerID="cri-o://713c4effe31810a066a6c91cad09980b52d6f337165b56e1193f30a1f347dc33" gracePeriod=30 Jan 22 07:02:30 crc kubenswrapper[4720]: I0122 07:02:30.777824 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="proxy-httpd" containerID="cri-o://c0f5a9c8abfeb8bb5df2d859b8ae16cb107e69c31d0e54471c72b4aa8e84b551" gracePeriod=30 Jan 22 07:02:31 crc kubenswrapper[4720]: I0122 07:02:31.051301 4720 generic.go:334] "Generic (PLEG): container finished" podID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerID="c0f5a9c8abfeb8bb5df2d859b8ae16cb107e69c31d0e54471c72b4aa8e84b551" exitCode=0 Jan 22 07:02:31 crc kubenswrapper[4720]: I0122 07:02:31.051349 4720 generic.go:334] "Generic (PLEG): container finished" podID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerID="713c4effe31810a066a6c91cad09980b52d6f337165b56e1193f30a1f347dc33" exitCode=2 Jan 22 07:02:31 crc kubenswrapper[4720]: I0122 07:02:31.051377 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerDied","Data":"c0f5a9c8abfeb8bb5df2d859b8ae16cb107e69c31d0e54471c72b4aa8e84b551"} Jan 22 07:02:31 crc kubenswrapper[4720]: I0122 07:02:31.051415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerDied","Data":"713c4effe31810a066a6c91cad09980b52d6f337165b56e1193f30a1f347dc33"} Jan 22 07:02:31 crc kubenswrapper[4720]: I0122 07:02:31.718009 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.168:3000/\": dial tcp 10.217.0.168:3000: connect: connection refused" Jan 22 07:02:32 crc kubenswrapper[4720]: I0122 07:02:32.069739 4720 generic.go:334] "Generic (PLEG): container finished" podID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerID="ffcc4b4cb191fddf2160975d339b31c845a24c05c2f2a17a1de6f36b6aa03a6e" exitCode=0 Jan 22 07:02:32 crc kubenswrapper[4720]: I0122 07:02:32.069804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerDied","Data":"ffcc4b4cb191fddf2160975d339b31c845a24c05c2f2a17a1de6f36b6aa03a6e"} Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.090699 4720 generic.go:334] "Generic (PLEG): container finished" podID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerID="2ad21fae366c0f1a24b188588f88b7e594795af4b0d223d95813de0b419c5678" exitCode=0 Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.091239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerDied","Data":"2ad21fae366c0f1a24b188588f88b7e594795af4b0d223d95813de0b419c5678"} Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.156724 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.243834 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.243983 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244046 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244097 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244153 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244197 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7ck78\" (UniqueName: \"kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244234 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.244253 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts\") pod \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\" (UID: \"9cf69410-41fe-483a-a2f1-03fb54dbf10e\") " Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.247619 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.257689 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts" (OuterVolumeSpecName: "scripts") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.257729 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78" (OuterVolumeSpecName: "kube-api-access-7ck78") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "kube-api-access-7ck78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.273399 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.290467 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.323932 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346051 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346087 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346102 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ck78\" (UniqueName: \"kubernetes.io/projected/9cf69410-41fe-483a-a2f1-03fb54dbf10e-kube-api-access-7ck78\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346113 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cf69410-41fe-483a-a2f1-03fb54dbf10e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346123 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.346136 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.364861 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: 
"9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.392137 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data" (OuterVolumeSpecName: "config-data") pod "9cf69410-41fe-483a-a2f1-03fb54dbf10e" (UID: "9cf69410-41fe-483a-a2f1-03fb54dbf10e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.448109 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:34 crc kubenswrapper[4720]: I0122 07:02:34.448397 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf69410-41fe-483a-a2f1-03fb54dbf10e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.102394 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"9cf69410-41fe-483a-a2f1-03fb54dbf10e","Type":"ContainerDied","Data":"4f33ae768513bdbf51b80badd182e6e3b40dea74b927d30eddf391b6c049cdf6"} Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.102518 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.103549 4720 scope.go:117] "RemoveContainer" containerID="c0f5a9c8abfeb8bb5df2d859b8ae16cb107e69c31d0e54471c72b4aa8e84b551" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.130689 4720 scope.go:117] "RemoveContainer" containerID="713c4effe31810a066a6c91cad09980b52d6f337165b56e1193f30a1f347dc33" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.155466 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.174115 4720 scope.go:117] "RemoveContainer" containerID="2ad21fae366c0f1a24b188588f88b7e594795af4b0d223d95813de0b419c5678" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.177887 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200336 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:35 crc kubenswrapper[4720]: E0122 07:02:35.200850 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-notification-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200879 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-notification-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: E0122 07:02:35.200893 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-central-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200905 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-central-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: E0122 07:02:35.200944 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="sg-core" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200954 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="sg-core" Jan 22 07:02:35 crc kubenswrapper[4720]: E0122 07:02:35.200968 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94acf8e7-279f-4560-9716-56f731501d94" containerName="keystone-api" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200976 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="94acf8e7-279f-4560-9716-56f731501d94" containerName="keystone-api" Jan 22 07:02:35 crc kubenswrapper[4720]: E0122 07:02:35.200987 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="proxy-httpd" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.200995 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="proxy-httpd" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201244 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="proxy-httpd" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201260 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-central-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201271 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="ceilometer-notification-agent" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201307 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="94acf8e7-279f-4560-9716-56f731501d94" containerName="keystone-api" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201327 4720 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" containerName="sg-core" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.201979 4720 scope.go:117] "RemoveContainer" containerID="ffcc4b4cb191fddf2160975d339b31c845a24c05c2f2a17a1de6f36b6aa03a6e" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.205821 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.208778 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.222044 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.222289 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.222405 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262482 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8cck\" (UniqueName: \"kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262591 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: 
I0122 07:02:35.262612 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262635 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262668 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262702 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262762 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.262810 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.364799 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.364867 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.364925 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.364954 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8cck\" (UniqueName: \"kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.365004 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.365023 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.365043 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.365066 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.366196 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.366581 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 
crc kubenswrapper[4720]: I0122 07:02:35.377353 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.378065 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.379549 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.388179 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.389421 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.408817 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8cck\" (UniqueName: 
\"kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck\") pod \"ceilometer-0\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:35 crc kubenswrapper[4720]: I0122 07:02:35.550239 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:36 crc kubenswrapper[4720]: I0122 07:02:36.073529 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:02:36 crc kubenswrapper[4720]: I0122 07:02:36.112562 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerStarted","Data":"982eb6d8661b5115d80f596550c42c052c7546a264cca44835862cb51205ca0b"} Jan 22 07:02:36 crc kubenswrapper[4720]: I0122 07:02:36.223366 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf69410-41fe-483a-a2f1-03fb54dbf10e" path="/var/lib/kubelet/pods/9cf69410-41fe-483a-a2f1-03fb54dbf10e/volumes" Jan 22 07:02:37 crc kubenswrapper[4720]: I0122 07:02:37.127706 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerStarted","Data":"f6f406a87fc744db8dee5a2ba970c771802eb4a58769948e6004286d4a46822f"} Jan 22 07:02:38 crc kubenswrapper[4720]: I0122 07:02:38.145688 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerStarted","Data":"69cc616e6a58721c8fd270e27ec67ddaf36b5a62655394e4c937b52debd35c60"} Jan 22 07:02:38 crc kubenswrapper[4720]: I0122 07:02:38.146256 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerStarted","Data":"cebf6145fce44c3a49da572baf5f7553febdd12ed33705afc1866c99dc7e8ccd"} Jan 22 07:02:40 crc kubenswrapper[4720]: I0122 07:02:40.167067 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerStarted","Data":"2bbf4e4d2b2dbb951ab083ea551fcf701bb6fbd409a5a39fa6779e5ce09550e3"} Jan 22 07:02:40 crc kubenswrapper[4720]: I0122 07:02:40.167544 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:02:40 crc kubenswrapper[4720]: I0122 07:02:40.194556 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.3073614190000002 podStartE2EDuration="5.194530038s" podCreationTimestamp="2026-01-22 07:02:35 +0000 UTC" firstStartedPulling="2026-01-22 07:02:36.08164979 +0000 UTC m=+1648.223556495" lastFinishedPulling="2026-01-22 07:02:38.968818409 +0000 UTC m=+1651.110725114" observedRunningTime="2026-01-22 07:02:40.188636125 +0000 UTC m=+1652.330542840" watchObservedRunningTime="2026-01-22 07:02:40.194530038 +0000 UTC m=+1652.336436743" Jan 22 07:02:59 crc kubenswrapper[4720]: I0122 07:02:59.780513 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:02:59 crc kubenswrapper[4720]: I0122 07:02:59.781251 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 22 07:03:05 crc kubenswrapper[4720]: I0122 07:03:05.657110 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:11 crc kubenswrapper[4720]: I0122 07:03:11.960281 4720 scope.go:117] "RemoveContainer" containerID="92bba22f19505073c77bbd838b72601a8a6bc744e60f1b3a73bb9dc2514c635a" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.487792 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.498378 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-kzmdm"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.556414 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher95d6-account-delete-h6r7p"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.557881 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.584740 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher95d6-account-delete-h6r7p"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.626075 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.626386 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" containerID="cri-o://4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" gracePeriod=30 Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.685547 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.686337 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-kuttl-api-log" containerID="cri-o://720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71" gracePeriod=30 Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.686981 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-api" containerID="cri-o://8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec" gracePeriod=30 Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.728310 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts\") pod 
\"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.728621 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnx6\" (UniqueName: \"kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6\") pod \"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.738774 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.739099 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerName="watcher-decision-engine" containerID="cri-o://2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" gracePeriod=30 Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.830346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts\") pod \"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.830419 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdnx6\" (UniqueName: \"kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6\") pod \"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 
07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.832010 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts\") pod \"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: I0122 07:03:12.879897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdnx6\" (UniqueName: \"kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6\") pod \"watcher95d6-account-delete-h6r7p\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:12 crc kubenswrapper[4720]: E0122 07:03:12.911125 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:03:12 crc kubenswrapper[4720]: E0122 07:03:12.938945 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:03:12 crc kubenswrapper[4720]: E0122 07:03:12.946099 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 
07:03:12 crc kubenswrapper[4720]: E0122 07:03:12.946188 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" Jan 22 07:03:13 crc kubenswrapper[4720]: I0122 07:03:13.178601 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:13 crc kubenswrapper[4720]: I0122 07:03:13.472465 4720 generic.go:334] "Generic (PLEG): container finished" podID="323e3085-cab5-4d90-accf-4586756bd395" containerID="720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71" exitCode=143 Jan 22 07:03:13 crc kubenswrapper[4720]: I0122 07:03:13.472936 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerDied","Data":"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71"} Jan 22 07:03:13 crc kubenswrapper[4720]: I0122 07:03:13.726256 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher95d6-account-delete-h6r7p"] Jan 22 07:03:14 crc kubenswrapper[4720]: I0122 07:03:14.226683 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf4d25e1-55a2-47e0-8c43-e138cee6d47c" path="/var/lib/kubelet/pods/cf4d25e1-55a2-47e0-8c43-e138cee6d47c/volumes" Jan 22 07:03:14 crc kubenswrapper[4720]: I0122 07:03:14.484177 4720 generic.go:334] "Generic (PLEG): container finished" podID="ccee5f2d-20a1-462e-bfc3-207200e78545" containerID="65d166d097ed14bb8a1fe7053c0cc1eaf0376b4d0c4e5917295c48d62590d8f8" exitCode=0 Jan 22 07:03:14 crc kubenswrapper[4720]: I0122 07:03:14.484230 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" event={"ID":"ccee5f2d-20a1-462e-bfc3-207200e78545","Type":"ContainerDied","Data":"65d166d097ed14bb8a1fe7053c0cc1eaf0376b4d0c4e5917295c48d62590d8f8"} Jan 22 07:03:14 crc kubenswrapper[4720]: I0122 07:03:14.484269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" event={"ID":"ccee5f2d-20a1-462e-bfc3-207200e78545","Type":"ContainerStarted","Data":"98a3fe5f6ea6ac0806ccbbd187918177908e5ea9ac5e5a726841cb67bb00d58b"} Jan 22 07:03:14 crc kubenswrapper[4720]: I0122 07:03:14.993363 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.087260 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.087328 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgcwj\" (UniqueName: \"kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.088348 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.088485 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.088503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.088560 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls\") pod \"323e3085-cab5-4d90-accf-4586756bd395\" (UID: \"323e3085-cab5-4d90-accf-4586756bd395\") " Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.090832 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs" (OuterVolumeSpecName: "logs") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.094275 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj" (OuterVolumeSpecName: "kube-api-access-lgcwj") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "kube-api-access-lgcwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.114216 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.126679 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.144275 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data" (OuterVolumeSpecName: "config-data") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.181717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "323e3085-cab5-4d90-accf-4586756bd395" (UID: "323e3085-cab5-4d90-accf-4586756bd395"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.191488 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.191780 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/323e3085-cab5-4d90-accf-4586756bd395-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.191847 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.191923 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.191991 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/323e3085-cab5-4d90-accf-4586756bd395-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.192062 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgcwj\" (UniqueName: \"kubernetes.io/projected/323e3085-cab5-4d90-accf-4586756bd395-kube-api-access-lgcwj\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.497380 4720 generic.go:334] "Generic (PLEG): container finished" podID="323e3085-cab5-4d90-accf-4586756bd395" containerID="8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec" exitCode=0 Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.497490 4720 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.497490 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerDied","Data":"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec"} Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.497587 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"323e3085-cab5-4d90-accf-4586756bd395","Type":"ContainerDied","Data":"1bf732d8bf11501a47deca330e06122beb19514da210024a9cccb6c91d60fac7"} Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.497624 4720 scope.go:117] "RemoveContainer" containerID="8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.563213 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.565087 4720 scope.go:117] "RemoveContainer" containerID="720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.570565 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.599761 4720 scope.go:117] "RemoveContainer" containerID="8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec" Jan 22 07:03:15 crc kubenswrapper[4720]: E0122 07:03:15.601442 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec\": container with ID starting with 8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec not found: ID 
does not exist" containerID="8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.601517 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec"} err="failed to get container status \"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec\": rpc error: code = NotFound desc = could not find container \"8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec\": container with ID starting with 8759e2364d07ab895c7f7a44af6b96487bf75c8350fea1fccb07514573d33cec not found: ID does not exist" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.601564 4720 scope.go:117] "RemoveContainer" containerID="720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71" Jan 22 07:03:15 crc kubenswrapper[4720]: E0122 07:03:15.602380 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71\": container with ID starting with 720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71 not found: ID does not exist" containerID="720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.602435 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71"} err="failed to get container status \"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71\": rpc error: code = NotFound desc = could not find container \"720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71\": container with ID starting with 720963ea41449eeb62fc264e60d5d0fda3a4f9f8cc378bd1c313ce7669650a71 not found: ID does not exist" Jan 22 07:03:15 crc kubenswrapper[4720]: I0122 07:03:15.899823 4720 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.007713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdnx6\" (UniqueName: \"kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6\") pod \"ccee5f2d-20a1-462e-bfc3-207200e78545\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.007965 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts\") pod \"ccee5f2d-20a1-462e-bfc3-207200e78545\" (UID: \"ccee5f2d-20a1-462e-bfc3-207200e78545\") " Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.008924 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccee5f2d-20a1-462e-bfc3-207200e78545" (UID: "ccee5f2d-20a1-462e-bfc3-207200e78545"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.026876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6" (OuterVolumeSpecName: "kube-api-access-cdnx6") pod "ccee5f2d-20a1-462e-bfc3-207200e78545" (UID: "ccee5f2d-20a1-462e-bfc3-207200e78545"). InnerVolumeSpecName "kube-api-access-cdnx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.110648 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdnx6\" (UniqueName: \"kubernetes.io/projected/ccee5f2d-20a1-462e-bfc3-207200e78545-kube-api-access-cdnx6\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.110687 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccee5f2d-20a1-462e-bfc3-207200e78545-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.209755 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.210291 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-central-agent" containerID="cri-o://f6f406a87fc744db8dee5a2ba970c771802eb4a58769948e6004286d4a46822f" gracePeriod=30 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.210509 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="proxy-httpd" containerID="cri-o://2bbf4e4d2b2dbb951ab083ea551fcf701bb6fbd409a5a39fa6779e5ce09550e3" gracePeriod=30 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.210566 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="sg-core" containerID="cri-o://69cc616e6a58721c8fd270e27ec67ddaf36b5a62655394e4c937b52debd35c60" gracePeriod=30 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.210619 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-notification-agent" containerID="cri-o://cebf6145fce44c3a49da572baf5f7553febdd12ed33705afc1866c99dc7e8ccd" gracePeriod=30 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.223371 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323e3085-cab5-4d90-accf-4586756bd395" path="/var/lib/kubelet/pods/323e3085-cab5-4d90-accf-4586756bd395/volumes" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.543242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" event={"ID":"ccee5f2d-20a1-462e-bfc3-207200e78545","Type":"ContainerDied","Data":"98a3fe5f6ea6ac0806ccbbd187918177908e5ea9ac5e5a726841cb67bb00d58b"} Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.543573 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98a3fe5f6ea6ac0806ccbbd187918177908e5ea9ac5e5a726841cb67bb00d58b" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.543316 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher95d6-account-delete-h6r7p" Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.554136 4720 generic.go:334] "Generic (PLEG): container finished" podID="15926338-ca91-47e9-b960-c66c0cea1d91" containerID="2bbf4e4d2b2dbb951ab083ea551fcf701bb6fbd409a5a39fa6779e5ce09550e3" exitCode=0 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.554178 4720 generic.go:334] "Generic (PLEG): container finished" podID="15926338-ca91-47e9-b960-c66c0cea1d91" containerID="69cc616e6a58721c8fd270e27ec67ddaf36b5a62655394e4c937b52debd35c60" exitCode=2 Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.554205 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerDied","Data":"2bbf4e4d2b2dbb951ab083ea551fcf701bb6fbd409a5a39fa6779e5ce09550e3"} Jan 22 07:03:16 crc kubenswrapper[4720]: I0122 07:03:16.554239 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerDied","Data":"69cc616e6a58721c8fd270e27ec67ddaf36b5a62655394e4c937b52debd35c60"} Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.566301 4720 generic.go:334] "Generic (PLEG): container finished" podID="15926338-ca91-47e9-b960-c66c0cea1d91" containerID="f6f406a87fc744db8dee5a2ba970c771802eb4a58769948e6004286d4a46822f" exitCode=0 Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.566381 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerDied","Data":"f6f406a87fc744db8dee5a2ba970c771802eb4a58769948e6004286d4a46822f"} Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.585861 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7sq2j"] Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 
07:03:17.594344 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7sq2j"] Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.611286 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher95d6-account-delete-h6r7p"] Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.619512 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc"] Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.637132 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher95d6-account-delete-h6r7p"] Jan 22 07:03:17 crc kubenswrapper[4720]: I0122 07:03:17.642931 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-95d6-account-create-update-pxjbc"] Jan 22 07:03:17 crc kubenswrapper[4720]: E0122 07:03:17.868216 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:03:17 crc kubenswrapper[4720]: E0122 07:03:17.870086 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:03:17 crc kubenswrapper[4720]: E0122 07:03:17.871737 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" 
cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:03:17 crc kubenswrapper[4720]: E0122 07:03:17.871831 4720 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" Jan 22 07:03:18 crc kubenswrapper[4720]: I0122 07:03:18.222828 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b5de5b3-1410-4b2c-92ab-85730d07e10c" path="/var/lib/kubelet/pods/2b5de5b3-1410-4b2c-92ab-85730d07e10c/volumes" Jan 22 07:03:18 crc kubenswrapper[4720]: I0122 07:03:18.223497 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ebd5b4a-64cb-4011-a9ff-483f4643d5b2" path="/var/lib/kubelet/pods/4ebd5b4a-64cb-4011-a9ff-483f4643d5b2/volumes" Jan 22 07:03:18 crc kubenswrapper[4720]: I0122 07:03:18.224009 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccee5f2d-20a1-462e-bfc3-207200e78545" path="/var/lib/kubelet/pods/ccee5f2d-20a1-462e-bfc3-207200e78545/volumes" Jan 22 07:03:18 crc kubenswrapper[4720]: E0122 07:03:18.972103 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 is running failed: container process not found" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:03:18 crc kubenswrapper[4720]: E0122 07:03:18.977309 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 is running failed: container process not found" 
containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:03:18 crc kubenswrapper[4720]: E0122 07:03:18.977838 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 is running failed: container process not found" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 22 07:03:18 crc kubenswrapper[4720]: E0122 07:03:18.977932 4720 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 is running failed: container process not found" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerName="watcher-decision-engine" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.332510 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488378 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488443 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488513 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488599 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488674 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9f44\" (UniqueName: \"kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.488769 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls\") pod \"c8ac272a-b713-4024-a0e0-fd1873016edc\" (UID: \"c8ac272a-b713-4024-a0e0-fd1873016edc\") " Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.489024 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs" (OuterVolumeSpecName: "logs") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.489320 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8ac272a-b713-4024-a0e0-fd1873016edc-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.496421 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44" (OuterVolumeSpecName: "kube-api-access-b9f44") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "kube-api-access-b9f44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.520049 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.520531 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.547202 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data" (OuterVolumeSpecName: "config-data") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.582676 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c8ac272a-b713-4024-a0e0-fd1873016edc" (UID: "c8ac272a-b713-4024-a0e0-fd1873016edc"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.583611 4720 generic.go:334] "Generic (PLEG): container finished" podID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" exitCode=0 Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.583672 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c8ac272a-b713-4024-a0e0-fd1873016edc","Type":"ContainerDied","Data":"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14"} Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.583681 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.583749 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c8ac272a-b713-4024-a0e0-fd1873016edc","Type":"ContainerDied","Data":"750a2060fdd3ee5269a6a5717685012607a85a2645aa3cd5e3ad4b09b466c1c8"} Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.583778 4720 scope.go:117] "RemoveContainer" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.590557 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.590613 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.590634 4720 reconciler_common.go:293] "Volume 
detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.590651 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8ac272a-b713-4024-a0e0-fd1873016edc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.590667 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9f44\" (UniqueName: \"kubernetes.io/projected/c8ac272a-b713-4024-a0e0-fd1873016edc-kube-api-access-b9f44\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.614464 4720 scope.go:117] "RemoveContainer" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" Jan 22 07:03:19 crc kubenswrapper[4720]: E0122 07:03:19.614982 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14\": container with ID starting with 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 not found: ID does not exist" containerID="2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.615016 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14"} err="failed to get container status \"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14\": rpc error: code = NotFound desc = could not find container \"2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14\": container with ID starting with 2ca5b25aef2acf847d3623b64b617239ffa11f2d97e87979afd2c26af8d5ea14 not found: ID does not exist" Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 
07:03:19.621508 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:03:19 crc kubenswrapper[4720]: I0122 07:03:19.634990 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:03:20 crc kubenswrapper[4720]: I0122 07:03:20.221003 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" path="/var/lib/kubelet/pods/c8ac272a-b713-4024-a0e0-fd1873016edc/volumes" Jan 22 07:03:20 crc kubenswrapper[4720]: I0122 07:03:20.602411 4720 generic.go:334] "Generic (PLEG): container finished" podID="15926338-ca91-47e9-b960-c66c0cea1d91" containerID="cebf6145fce44c3a49da572baf5f7553febdd12ed33705afc1866c99dc7e8ccd" exitCode=0 Jan 22 07:03:20 crc kubenswrapper[4720]: I0122 07:03:20.602463 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerDied","Data":"cebf6145fce44c3a49da572baf5f7553febdd12ed33705afc1866c99dc7e8ccd"} Jan 22 07:03:20 crc kubenswrapper[4720]: I0122 07:03:20.822633 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.015421 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.015854 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8cck\" (UniqueName: \"kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021053 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021107 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021154 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021270 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021391 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021487 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data\") pod \"15926338-ca91-47e9-b960-c66c0cea1d91\" (UID: \"15926338-ca91-47e9-b960-c66c0cea1d91\") " Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021560 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck" (OuterVolumeSpecName: "kube-api-access-x8cck") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "kube-api-access-x8cck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.021948 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.022255 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.022342 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8cck\" (UniqueName: \"kubernetes.io/projected/15926338-ca91-47e9-b960-c66c0cea1d91-kube-api-access-x8cck\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.022365 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.024866 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts" (OuterVolumeSpecName: "scripts") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.050254 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.066549 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.086798 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.111837 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data" (OuterVolumeSpecName: "config-data") pod "15926338-ca91-47e9-b960-c66c0cea1d91" (UID: "15926338-ca91-47e9-b960-c66c0cea1d91"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124289 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124324 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124335 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15926338-ca91-47e9-b960-c66c0cea1d91-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124348 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124358 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.124366 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15926338-ca91-47e9-b960-c66c0cea1d91-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.614741 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"15926338-ca91-47e9-b960-c66c0cea1d91","Type":"ContainerDied","Data":"982eb6d8661b5115d80f596550c42c052c7546a264cca44835862cb51205ca0b"} Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 
07:03:21.614803 4720 scope.go:117] "RemoveContainer" containerID="2bbf4e4d2b2dbb951ab083ea551fcf701bb6fbd409a5a39fa6779e5ce09550e3" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.616054 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.643682 4720 scope.go:117] "RemoveContainer" containerID="69cc616e6a58721c8fd270e27ec67ddaf36b5a62655394e4c937b52debd35c60" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.677804 4720 scope.go:117] "RemoveContainer" containerID="cebf6145fce44c3a49da572baf5f7553febdd12ed33705afc1866c99dc7e8ccd" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.688843 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.709186 4720 scope.go:117] "RemoveContainer" containerID="f6f406a87fc744db8dee5a2ba970c771802eb4a58769948e6004286d4a46822f" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.719940 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730063 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730591 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="proxy-httpd" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730617 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="proxy-httpd" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730639 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-notification-agent" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 
07:03:21.730649 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-notification-agent" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730673 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccee5f2d-20a1-462e-bfc3-207200e78545" containerName="mariadb-account-delete" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730683 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccee5f2d-20a1-462e-bfc3-207200e78545" containerName="mariadb-account-delete" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730694 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-kuttl-api-log" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730703 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-kuttl-api-log" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730716 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="sg-core" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730724 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="sg-core" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730749 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-api" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730756 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-api" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730768 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-central-agent" Jan 22 07:03:21 crc 
kubenswrapper[4720]: I0122 07:03:21.730775 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-central-agent" Jan 22 07:03:21 crc kubenswrapper[4720]: E0122 07:03:21.730791 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerName="watcher-decision-engine" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730800 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerName="watcher-decision-engine" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730973 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-notification-agent" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730986 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="sg-core" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.730997 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccee5f2d-20a1-462e-bfc3-207200e78545" containerName="mariadb-account-delete" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.731007 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="ceilometer-central-agent" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.731015 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" containerName="proxy-httpd" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.731025 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ac272a-b713-4024-a0e0-fd1873016edc" containerName="watcher-decision-engine" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.731032 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-api" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.731043 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="323e3085-cab5-4d90-accf-4586756bd395" containerName="watcher-kuttl-api-log" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.733073 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.742395 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.746664 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.748101 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.748245 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.845795 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.845882 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.845933 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.845961 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.846009 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7bz7\" (UniqueName: \"kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.846040 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.846158 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.846186 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948069 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948142 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948220 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948244 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948263 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948298 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7bz7\" (UniqueName: \"kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948322 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.948839 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.953791 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.954685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.961233 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.964975 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.970073 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7bz7\" (UniqueName: \"kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7\") pod \"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:21 crc kubenswrapper[4720]: I0122 07:03:21.970825 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts\") pod 
\"ceilometer-0\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.073190 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.230675 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15926338-ca91-47e9-b960-c66c0cea1d91" path="/var/lib/kubelet/pods/15926338-ca91-47e9-b960-c66c0cea1d91/volumes" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.650861 4720 generic.go:334] "Generic (PLEG): container finished" podID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" exitCode=0 Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.651222 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f29a126f-9c3b-4569-bfe2-64a37f315aa8","Type":"ContainerDied","Data":"4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9"} Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.764677 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.871054 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data\") pod \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.871208 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4cpw\" (UniqueName: \"kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw\") pod \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.871241 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls\") pod \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.871270 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs\") pod \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.871330 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle\") pod \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\" (UID: \"f29a126f-9c3b-4569-bfe2-64a37f315aa8\") " Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.874404 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs" (OuterVolumeSpecName: "logs") pod "f29a126f-9c3b-4569-bfe2-64a37f315aa8" (UID: "f29a126f-9c3b-4569-bfe2-64a37f315aa8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.878638 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw" (OuterVolumeSpecName: "kube-api-access-x4cpw") pod "f29a126f-9c3b-4569-bfe2-64a37f315aa8" (UID: "f29a126f-9c3b-4569-bfe2-64a37f315aa8"). InnerVolumeSpecName "kube-api-access-x4cpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.881514 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.900548 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f29a126f-9c3b-4569-bfe2-64a37f315aa8" (UID: "f29a126f-9c3b-4569-bfe2-64a37f315aa8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.913262 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.938067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data" (OuterVolumeSpecName: "config-data") pod "f29a126f-9c3b-4569-bfe2-64a37f315aa8" (UID: "f29a126f-9c3b-4569-bfe2-64a37f315aa8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.958691 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f29a126f-9c3b-4569-bfe2-64a37f315aa8" (UID: "f29a126f-9c3b-4569-bfe2-64a37f315aa8"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.973885 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4cpw\" (UniqueName: \"kubernetes.io/projected/f29a126f-9c3b-4569-bfe2-64a37f315aa8-kube-api-access-x4cpw\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.973947 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.973966 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f29a126f-9c3b-4569-bfe2-64a37f315aa8-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.973978 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:22 crc kubenswrapper[4720]: I0122 07:03:22.973989 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29a126f-9c3b-4569-bfe2-64a37f315aa8-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.662379 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerStarted","Data":"445589bd9c4c5e0b56a28f4a8918239cd9b2808a00257e492817e13644bc55d8"} Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.662618 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerStarted","Data":"96835430102cc00ed51bb7e7bfcd5b2db244df711ddc846705ecb6f3cbb6a8a5"} Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.664086 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"f29a126f-9c3b-4569-bfe2-64a37f315aa8","Type":"ContainerDied","Data":"1556095f37d8b084f2454409de12a0181cebfec940584a444e92696e269b72ce"} Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.664117 4720 scope.go:117] "RemoveContainer" containerID="4811e7d5ba0a5f055fb36035d842cd49a65552b05c7a230970af54280030f1d9" Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.664210 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.696159 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:03:23 crc kubenswrapper[4720]: I0122 07:03:23.703527 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:03:24 crc kubenswrapper[4720]: I0122 07:03:24.222917 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" path="/var/lib/kubelet/pods/f29a126f-9c3b-4569-bfe2-64a37f315aa8/volumes" Jan 22 07:03:24 crc kubenswrapper[4720]: I0122 07:03:24.674550 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerStarted","Data":"d0dedeaec41ba5ac0ff39c1bfca85633f3b94611cf604fcf6af1198c6a959eb8"} Jan 22 07:03:25 crc kubenswrapper[4720]: I0122 07:03:25.718255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerStarted","Data":"9bf9194a0332b49721aa03eee00c11e46b8bc0a1e64f0126b70f28f80d64ec22"} Jan 22 07:03:26 crc kubenswrapper[4720]: I0122 07:03:26.741931 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerStarted","Data":"8a4459b3cd7051bbf6872a5139ac333dc6dbc45b87d6a82eb5c952de0f250ec2"} Jan 22 07:03:26 crc kubenswrapper[4720]: I0122 07:03:26.742366 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:26 crc kubenswrapper[4720]: I0122 07:03:26.766082 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.7364636879999997 
podStartE2EDuration="5.76605651s" podCreationTimestamp="2026-01-22 07:03:21 +0000 UTC" firstStartedPulling="2026-01-22 07:03:22.912944458 +0000 UTC m=+1695.054851163" lastFinishedPulling="2026-01-22 07:03:25.94253728 +0000 UTC m=+1698.084443985" observedRunningTime="2026-01-22 07:03:26.759309743 +0000 UTC m=+1698.901216448" watchObservedRunningTime="2026-01-22 07:03:26.76605651 +0000 UTC m=+1698.907963215" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.143577 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-dm7n6"] Jan 22 07:03:27 crc kubenswrapper[4720]: E0122 07:03:27.144475 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.144503 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.144797 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29a126f-9c3b-4569-bfe2-64a37f315aa8" containerName="watcher-applier" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.145625 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.152792 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-5bae-account-create-update-m6l56"] Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.154635 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.158361 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.173619 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvm6\" (UniqueName: \"kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.173680 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.176086 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dm7n6"] Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.188983 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5bae-account-create-update-m6l56"] Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.275416 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.275500 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5kx\" (UniqueName: \"kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.275691 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.275741 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lvm6\" (UniqueName: \"kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.276884 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.306840 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lvm6\" (UniqueName: \"kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6\") pod \"watcher-db-create-dm7n6\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " 
pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.377891 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5kx\" (UniqueName: \"kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.378025 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.378762 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.395402 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5kx\" (UniqueName: \"kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx\") pod \"watcher-5bae-account-create-update-m6l56\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.485689 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.500843 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:27 crc kubenswrapper[4720]: I0122 07:03:27.942243 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dm7n6"] Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.029348 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5bae-account-create-update-m6l56"] Jan 22 07:03:28 crc kubenswrapper[4720]: W0122 07:03:28.035722 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bdec6b_7735_4197_b42f_378d6ec58b7a.slice/crio-e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958 WatchSource:0}: Error finding container e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958: Status 404 returned error can't find the container with id e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958 Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.761446 4720 generic.go:334] "Generic (PLEG): container finished" podID="d8efc26a-3612-4b7b-a772-769daea1bb6f" containerID="18ec0b86850adae9be13f01e14a57e94e502305c7156e09baee267f6d9df281d" exitCode=0 Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.761527 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dm7n6" event={"ID":"d8efc26a-3612-4b7b-a772-769daea1bb6f","Type":"ContainerDied","Data":"18ec0b86850adae9be13f01e14a57e94e502305c7156e09baee267f6d9df281d"} Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.761560 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dm7n6" 
event={"ID":"d8efc26a-3612-4b7b-a772-769daea1bb6f","Type":"ContainerStarted","Data":"bc7ddd978af722f150f42b5638afaa4f681e517ae935c3464055c4572bc9bf4c"} Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.767288 4720 generic.go:334] "Generic (PLEG): container finished" podID="c7bdec6b-7735-4197-b42f-378d6ec58b7a" containerID="19692928c611dc07da557717d9019373ee97a4da2fce2243759faecdd7a6a4dc" exitCode=0 Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.767348 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" event={"ID":"c7bdec6b-7735-4197-b42f-378d6ec58b7a","Type":"ContainerDied","Data":"19692928c611dc07da557717d9019373ee97a4da2fce2243759faecdd7a6a4dc"} Jan 22 07:03:28 crc kubenswrapper[4720]: I0122 07:03:28.767380 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" event={"ID":"c7bdec6b-7735-4197-b42f-378d6ec58b7a","Type":"ContainerStarted","Data":"e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958"} Jan 22 07:03:29 crc kubenswrapper[4720]: I0122 07:03:29.780190 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:03:29 crc kubenswrapper[4720]: I0122 07:03:29.780476 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:03:29 crc kubenswrapper[4720]: I0122 07:03:29.780526 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 07:03:29 crc kubenswrapper[4720]: I0122 07:03:29.781407 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 07:03:29 crc kubenswrapper[4720]: I0122 07:03:29.781468 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" gracePeriod=600 Jan 22 07:03:29 crc kubenswrapper[4720]: E0122 07:03:29.948673 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.323490 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.330030 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.418656 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts\") pod \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.418706 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts\") pod \"d8efc26a-3612-4b7b-a772-769daea1bb6f\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.419298 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7bdec6b-7735-4197-b42f-378d6ec58b7a" (UID: "c7bdec6b-7735-4197-b42f-378d6ec58b7a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.419373 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8efc26a-3612-4b7b-a772-769daea1bb6f" (UID: "d8efc26a-3612-4b7b-a772-769daea1bb6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.519757 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5kx\" (UniqueName: \"kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx\") pod \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\" (UID: \"c7bdec6b-7735-4197-b42f-378d6ec58b7a\") " Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.520111 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lvm6\" (UniqueName: \"kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6\") pod \"d8efc26a-3612-4b7b-a772-769daea1bb6f\" (UID: \"d8efc26a-3612-4b7b-a772-769daea1bb6f\") " Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.522708 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7bdec6b-7735-4197-b42f-378d6ec58b7a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.522728 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8efc26a-3612-4b7b-a772-769daea1bb6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.525722 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx" (OuterVolumeSpecName: "kube-api-access-nj5kx") pod "c7bdec6b-7735-4197-b42f-378d6ec58b7a" (UID: "c7bdec6b-7735-4197-b42f-378d6ec58b7a"). InnerVolumeSpecName "kube-api-access-nj5kx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.549195 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6" (OuterVolumeSpecName: "kube-api-access-5lvm6") pod "d8efc26a-3612-4b7b-a772-769daea1bb6f" (UID: "d8efc26a-3612-4b7b-a772-769daea1bb6f"). InnerVolumeSpecName "kube-api-access-5lvm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.623654 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lvm6\" (UniqueName: \"kubernetes.io/projected/d8efc26a-3612-4b7b-a772-769daea1bb6f-kube-api-access-5lvm6\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.623682 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5kx\" (UniqueName: \"kubernetes.io/projected/c7bdec6b-7735-4197-b42f-378d6ec58b7a-kube-api-access-nj5kx\") on node \"crc\" DevicePath \"\"" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.786981 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-dm7n6" event={"ID":"d8efc26a-3612-4b7b-a772-769daea1bb6f","Type":"ContainerDied","Data":"bc7ddd978af722f150f42b5638afaa4f681e517ae935c3464055c4572bc9bf4c"} Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.787135 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc7ddd978af722f150f42b5638afaa4f681e517ae935c3464055c4572bc9bf4c" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.787226 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-dm7n6" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.789730 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" exitCode=0 Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.789767 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c"} Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.789836 4720 scope.go:117] "RemoveContainer" containerID="cef29da1a352e3d091047268daeade230282190271ca25c80b09fe79bbd42efe" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.790182 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:03:31 crc kubenswrapper[4720]: E0122 07:03:30.790501 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.794011 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" event={"ID":"c7bdec6b-7735-4197-b42f-378d6ec58b7a","Type":"ContainerDied","Data":"e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958"} Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.794047 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5bae-account-create-update-m6l56" Jan 22 07:03:31 crc kubenswrapper[4720]: I0122 07:03:30.794065 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e539f12a572c6413832755acf3fc31e66d8de2353c478ee032cca7ce5c4fc958" Jan 22 07:03:42 crc kubenswrapper[4720]: I0122 07:03:42.211780 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:03:42 crc kubenswrapper[4720]: E0122 07:03:42.213831 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:03:52 crc kubenswrapper[4720]: I0122 07:03:52.081870 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:03:55 crc kubenswrapper[4720]: I0122 07:03:55.211062 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:03:55 crc kubenswrapper[4720]: E0122 07:03:55.211662 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:04:06 crc kubenswrapper[4720]: I0122 07:04:06.211715 4720 scope.go:117] "RemoveContainer" 
containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:04:06 crc kubenswrapper[4720]: E0122 07:04:06.212409 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:04:12 crc kubenswrapper[4720]: I0122 07:04:12.058676 4720 scope.go:117] "RemoveContainer" containerID="9c1de39ef7a6e57b29b6c436cb2274f19acfee9e2e92f84c2315ed45f2496cfd" Jan 22 07:04:18 crc kubenswrapper[4720]: I0122 07:04:18.216313 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:04:18 crc kubenswrapper[4720]: E0122 07:04:18.217138 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:04:29 crc kubenswrapper[4720]: I0122 07:04:29.211120 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:04:29 crc kubenswrapper[4720]: E0122 07:04:29.212060 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:04:42 crc kubenswrapper[4720]: I0122 07:04:42.210957 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:04:42 crc kubenswrapper[4720]: E0122 07:04:42.211645 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:04:55 crc kubenswrapper[4720]: I0122 07:04:55.211331 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:04:55 crc kubenswrapper[4720]: E0122 07:04:55.213054 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:05:10 crc kubenswrapper[4720]: I0122 07:05:10.210396 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:05:10 crc kubenswrapper[4720]: E0122 07:05:10.211996 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:05:12 crc kubenswrapper[4720]: I0122 07:05:12.241335 4720 scope.go:117] "RemoveContainer" containerID="7e854bfb3f205f9888883a028097f7b7689c09c4ebc28d8f17bc264a304218a0" Jan 22 07:05:12 crc kubenswrapper[4720]: I0122 07:05:12.269438 4720 scope.go:117] "RemoveContainer" containerID="f2dcf7fc6592ea2bce8fd53f172e7eea21576aedf2c1682c7bc65b43df45782b" Jan 22 07:05:12 crc kubenswrapper[4720]: I0122 07:05:12.309191 4720 scope.go:117] "RemoveContainer" containerID="6154ec025579f67fef067420cf0b7795ffabd535e39b818e26a71d951d9a26be" Jan 22 07:05:23 crc kubenswrapper[4720]: I0122 07:05:23.211515 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:05:23 crc kubenswrapper[4720]: E0122 07:05:23.212609 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:05:37 crc kubenswrapper[4720]: I0122 07:05:37.211359 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:05:37 crc kubenswrapper[4720]: E0122 07:05:37.211994 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:05:48 crc kubenswrapper[4720]: I0122 07:05:48.215806 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:05:48 crc kubenswrapper[4720]: E0122 07:05:48.216631 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:06:02 crc kubenswrapper[4720]: I0122 07:06:02.210757 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:06:02 crc kubenswrapper[4720]: E0122 07:06:02.211423 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.435193 4720 scope.go:117] "RemoveContainer" containerID="f7f9a35a28503c7f5cdf6d056f6a50b947d856a6f06156c496f25d1e28fde1d4" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.468140 4720 scope.go:117] "RemoveContainer" containerID="9a5607d7819b622b913b6449fd1e6264ff33bd18e7bb5ea97c8bbb87ab558e9a" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.518548 4720 scope.go:117] "RemoveContainer" containerID="64cf752fc4b705d26561fb6a06c1d91676c2c1297c8d860a55783111b936e97f" Jan 
22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.537740 4720 scope.go:117] "RemoveContainer" containerID="d81b4d0cfc63fd64314c793b2f71b312f481851a03f9ea54e104e647f35d5c28" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.569863 4720 scope.go:117] "RemoveContainer" containerID="1e21d13d07afa88bb5c09e77ef705a17a9c36bd81635ace49ed4dcb4c4ca31c0" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.605358 4720 scope.go:117] "RemoveContainer" containerID="eb5649ecc7c2ab54f2dc5aa7a5878e7656682c34abc3fb110d2837af4ed1984b" Jan 22 07:06:12 crc kubenswrapper[4720]: I0122 07:06:12.638874 4720 scope.go:117] "RemoveContainer" containerID="5517b719f8dab98ca6d28fc71100fee4f6d967b1b70e682f90937a78c5fcbec1" Jan 22 07:06:15 crc kubenswrapper[4720]: I0122 07:06:15.211162 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:06:15 crc kubenswrapper[4720]: E0122 07:06:15.211724 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:06:26 crc kubenswrapper[4720]: I0122 07:06:26.211438 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:06:26 crc kubenswrapper[4720]: E0122 07:06:26.212291 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:06:33 crc kubenswrapper[4720]: I0122 07:06:33.062859 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-96mbz"] Jan 22 07:06:33 crc kubenswrapper[4720]: I0122 07:06:33.070002 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-96mbz"] Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.033411 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-w6jkf"] Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.042457 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"] Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.050789 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-w6jkf"] Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.058231 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-2e07-account-create-update-fjdrg"] Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.222482 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3" path="/var/lib/kubelet/pods/6bf728f2-33bb-4f7c-b2a1-55e4cfd402e3/volumes" Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.223228 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d421e895-4cb2-4a95-9a5b-ebf16f934a57" path="/var/lib/kubelet/pods/d421e895-4cb2-4a95-9a5b-ebf16f934a57/volumes" Jan 22 07:06:34 crc kubenswrapper[4720]: I0122 07:06:34.223779 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1642b8a-36b1-4482-bb8e-f289886d7d82" path="/var/lib/kubelet/pods/f1642b8a-36b1-4482-bb8e-f289886d7d82/volumes" Jan 22 07:06:37 crc kubenswrapper[4720]: I0122 
07:06:37.210899 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:06:37 crc kubenswrapper[4720]: E0122 07:06:37.211639 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:06:48 crc kubenswrapper[4720]: I0122 07:06:48.216112 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:06:48 crc kubenswrapper[4720]: E0122 07:06:48.217139 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:07:02 crc kubenswrapper[4720]: I0122 07:07:02.211200 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:07:02 crc kubenswrapper[4720]: E0122 07:07:02.212045 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:07:08 crc 
kubenswrapper[4720]: I0122 07:07:08.046794 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-v5pc9"] Jan 22 07:07:08 crc kubenswrapper[4720]: I0122 07:07:08.054951 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-v5pc9"] Jan 22 07:07:08 crc kubenswrapper[4720]: I0122 07:07:08.220012 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99" path="/var/lib/kubelet/pods/1eb3e6e5-9c5a-44ab-af1e-46fcd3a22c99/volumes" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.797306 4720 scope.go:117] "RemoveContainer" containerID="2c1b6ee7e31eb2b1690292c1e0b2dbc767f64df8f199f24080d8f5b2be353c7b" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.833274 4720 scope.go:117] "RemoveContainer" containerID="7b28e8cd3afd54a30e97ca37d482cc2c07ab84f8171505e65ddc7aaba1922c2c" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.869254 4720 scope.go:117] "RemoveContainer" containerID="e7d08a4a55ec335da475031256cc67252a201eb76c20099836a15177c390783f" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.909200 4720 scope.go:117] "RemoveContainer" containerID="b54b07fec61ed15a3e16ee62a54fe2c9cfe0d2ae0c39be951a71d9735a84a9d0" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.937173 4720 scope.go:117] "RemoveContainer" containerID="f5f67f7122c451feddcf36c21770c5990412236753b1de32e9107c61632de28d" Jan 22 07:07:12 crc kubenswrapper[4720]: I0122 07:07:12.976872 4720 scope.go:117] "RemoveContainer" containerID="339c85955f662dca6dad9a2d3eccc74d3dc10f9483591310bd00b569425c12cc" Jan 22 07:07:13 crc kubenswrapper[4720]: I0122 07:07:13.003298 4720 scope.go:117] "RemoveContainer" containerID="69e0c984c8e825301e66dd11bd51bdfac2c9df7c554d0cd08fa9f6efd71e1f91" Jan 22 07:07:13 crc kubenswrapper[4720]: I0122 07:07:13.067347 4720 scope.go:117] "RemoveContainer" 
containerID="1ac7dfbb6385fb7241272ac912ea7a32567a7470cb7545fd2c0ef99601e814c7" Jan 22 07:07:13 crc kubenswrapper[4720]: I0122 07:07:13.086840 4720 scope.go:117] "RemoveContainer" containerID="0774a0bdfd2635d1f7fe0d734d0e352b30cf542549b9e5983b4edd34c4e1cd83" Jan 22 07:07:13 crc kubenswrapper[4720]: I0122 07:07:13.109190 4720 scope.go:117] "RemoveContainer" containerID="68fcd493888b7f71aac73f22324b312a2abe1d9eebc546619dc858b40fce8675" Jan 22 07:07:13 crc kubenswrapper[4720]: I0122 07:07:13.132568 4720 scope.go:117] "RemoveContainer" containerID="f3da2e2411b61d614b724da1441be5c003d7e5bfc77346bccb29187cb3fda5cb" Jan 22 07:07:16 crc kubenswrapper[4720]: I0122 07:07:16.211519 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:07:16 crc kubenswrapper[4720]: E0122 07:07:16.212106 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:07:28 crc kubenswrapper[4720]: I0122 07:07:28.216342 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:07:28 crc kubenswrapper[4720]: E0122 07:07:28.217485 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:07:42 crc 
kubenswrapper[4720]: I0122 07:07:42.210222 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:07:42 crc kubenswrapper[4720]: E0122 07:07:42.210903 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:07:53 crc kubenswrapper[4720]: I0122 07:07:53.211795 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:07:53 crc kubenswrapper[4720]: E0122 07:07:53.212462 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.972582 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ll64"] Jan 22 07:08:00 crc kubenswrapper[4720]: E0122 07:08:00.973549 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bdec6b-7735-4197-b42f-378d6ec58b7a" containerName="mariadb-account-create-update" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.973566 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bdec6b-7735-4197-b42f-378d6ec58b7a" containerName="mariadb-account-create-update" Jan 22 07:08:00 crc kubenswrapper[4720]: E0122 07:08:00.973580 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8efc26a-3612-4b7b-a772-769daea1bb6f" containerName="mariadb-database-create" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.973588 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8efc26a-3612-4b7b-a772-769daea1bb6f" containerName="mariadb-database-create" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.973857 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8efc26a-3612-4b7b-a772-769daea1bb6f" containerName="mariadb-database-create" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.973990 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bdec6b-7735-4197-b42f-378d6ec58b7a" containerName="mariadb-account-create-update" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.975694 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ll64" Jan 22 07:08:00 crc kubenswrapper[4720]: I0122 07:08:00.999343 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ll64"] Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.028481 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64" Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.028806 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5dr5\" (UniqueName: \"kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64" Jan 22 07:08:01 
crc kubenswrapper[4720]: I0122 07:08:01.028844 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.130974 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.131267 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5dr5\" (UniqueName: \"kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.131291 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.131978 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.132589 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.153731 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5dr5\" (UniqueName: \"kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5\") pod \"certified-operators-7ll64\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") " pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.303395 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.801194 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ll64"]
Jan 22 07:08:01 crc kubenswrapper[4720]: I0122 07:08:01.867436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerStarted","Data":"0fd0eb2c98ebec18a81b363409aa85d69b13d745fd50f091460471dff2e42fb5"}
Jan 22 07:08:02 crc kubenswrapper[4720]: I0122 07:08:02.877247 4720 generic.go:334] "Generic (PLEG): container finished" podID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerID="9b55e298fdcfdf455f9da9ae1afd7fc2b0e1c4d4fd3ba52ef8c125ee36af2997" exitCode=0
Jan 22 07:08:02 crc kubenswrapper[4720]: I0122 07:08:02.877366 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerDied","Data":"9b55e298fdcfdf455f9da9ae1afd7fc2b0e1c4d4fd3ba52ef8c125ee36af2997"}
Jan 22 07:08:03 crc kubenswrapper[4720]: I0122 07:08:03.888662 4720 generic.go:334] "Generic (PLEG): container finished" podID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerID="b634a1971a025259fa90537eddd2553644bc738018e8abba74324f9132128759" exitCode=0
Jan 22 07:08:03 crc kubenswrapper[4720]: I0122 07:08:03.888999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerDied","Data":"b634a1971a025259fa90537eddd2553644bc738018e8abba74324f9132128759"}
Jan 22 07:08:04 crc kubenswrapper[4720]: I0122 07:08:04.899559 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerStarted","Data":"bf5211d3180915d6ca93f84f017973471ea5de142bef65e7ea6e77dc8e6fe872"}
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.164259 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ll64" podStartSLOduration=5.753469604 podStartE2EDuration="7.16423357s" podCreationTimestamp="2026-01-22 07:08:00 +0000 UTC" firstStartedPulling="2026-01-22 07:08:02.880716667 +0000 UTC m=+1975.022623372" lastFinishedPulling="2026-01-22 07:08:04.291480633 +0000 UTC m=+1976.433387338" observedRunningTime="2026-01-22 07:08:04.923742264 +0000 UTC m=+1977.065648999" watchObservedRunningTime="2026-01-22 07:08:07.16423357 +0000 UTC m=+1979.306140275"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.173986 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.176605 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.190095 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.219077 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c"
Jan 22 07:08:07 crc kubenswrapper[4720]: E0122 07:08:07.219620 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.256328 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.256442 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.256628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgp8\" (UniqueName: \"kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.490335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgp8\" (UniqueName: \"kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.490484 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.490527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.491117 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.492848 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.527367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgp8\" (UniqueName: \"kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8\") pod \"redhat-operators-twdnp\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") " pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:07 crc kubenswrapper[4720]: I0122 07:08:07.542416 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:08 crc kubenswrapper[4720]: I0122 07:08:08.006795 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:08 crc kubenswrapper[4720]: I0122 07:08:08.930243 4720 generic.go:334] "Generic (PLEG): container finished" podID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerID="6bf4a78e666f9e1d30af8f8d6589a1a13458e1cea1fb160c3e0ce2b27c464fbe" exitCode=0
Jan 22 07:08:08 crc kubenswrapper[4720]: I0122 07:08:08.930592 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerDied","Data":"6bf4a78e666f9e1d30af8f8d6589a1a13458e1cea1fb160c3e0ce2b27c464fbe"}
Jan 22 07:08:08 crc kubenswrapper[4720]: I0122 07:08:08.930630 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerStarted","Data":"451b9406ab9edcc37b27333a6d62fa29bee514b015580298ab9171a11b43f80f"}
Jan 22 07:08:10 crc kubenswrapper[4720]: I0122 07:08:10.951886 4720 generic.go:334] "Generic (PLEG): container finished" podID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerID="bf8d4fd4408e33bcbfbfb6ec543074d20c53f70d663ad31446ae0fc4673e6c0f" exitCode=0
Jan 22 07:08:10 crc kubenswrapper[4720]: I0122 07:08:10.952056 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerDied","Data":"bf8d4fd4408e33bcbfbfb6ec543074d20c53f70d663ad31446ae0fc4673e6c0f"}
Jan 22 07:08:11 crc kubenswrapper[4720]: I0122 07:08:11.303886 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:11 crc kubenswrapper[4720]: I0122 07:08:11.303995 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:11 crc kubenswrapper[4720]: I0122 07:08:11.360821 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:11 crc kubenswrapper[4720]: I0122 07:08:11.964548 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerStarted","Data":"eaecc020e4244f98626f1961efef84b8ae04369f01e58bea3b038e3c2db59a0a"}
Jan 22 07:08:11 crc kubenswrapper[4720]: I0122 07:08:11.990608 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twdnp" podStartSLOduration=2.512375866 podStartE2EDuration="4.990586528s" podCreationTimestamp="2026-01-22 07:08:07 +0000 UTC" firstStartedPulling="2026-01-22 07:08:08.93195184 +0000 UTC m=+1981.073858545" lastFinishedPulling="2026-01-22 07:08:11.410162512 +0000 UTC m=+1983.552069207" observedRunningTime="2026-01-22 07:08:11.989604391 +0000 UTC m=+1984.131511106" watchObservedRunningTime="2026-01-22 07:08:11.990586528 +0000 UTC m=+1984.132493233"
Jan 22 07:08:12 crc kubenswrapper[4720]: I0122 07:08:12.025302 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:13 crc kubenswrapper[4720]: I0122 07:08:13.160229 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ll64"]
Jan 22 07:08:13 crc kubenswrapper[4720]: I0122 07:08:13.981186 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ll64" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="registry-server" containerID="cri-o://bf5211d3180915d6ca93f84f017973471ea5de142bef65e7ea6e77dc8e6fe872" gracePeriod=2
Jan 22 07:08:14 crc kubenswrapper[4720]: I0122 07:08:14.991151 4720 generic.go:334] "Generic (PLEG): container finished" podID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerID="bf5211d3180915d6ca93f84f017973471ea5de142bef65e7ea6e77dc8e6fe872" exitCode=0
Jan 22 07:08:14 crc kubenswrapper[4720]: I0122 07:08:14.991201 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerDied","Data":"bf5211d3180915d6ca93f84f017973471ea5de142bef65e7ea6e77dc8e6fe872"}
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.510623 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.546967 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5dr5\" (UniqueName: \"kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5\") pod \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") "
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.547072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities\") pod \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") "
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.547147 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content\") pod \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\" (UID: \"eeeaea3e-c420-47ad-862a-ecb4c26eecf4\") "
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.547973 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities" (OuterVolumeSpecName: "utilities") pod "eeeaea3e-c420-47ad-862a-ecb4c26eecf4" (UID: "eeeaea3e-c420-47ad-862a-ecb4c26eecf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.557097 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5" (OuterVolumeSpecName: "kube-api-access-c5dr5") pod "eeeaea3e-c420-47ad-862a-ecb4c26eecf4" (UID: "eeeaea3e-c420-47ad-862a-ecb4c26eecf4"). InnerVolumeSpecName "kube-api-access-c5dr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.608988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eeeaea3e-c420-47ad-862a-ecb4c26eecf4" (UID: "eeeaea3e-c420-47ad-862a-ecb4c26eecf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.649068 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5dr5\" (UniqueName: \"kubernetes.io/projected/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-kube-api-access-c5dr5\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.649100 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:15 crc kubenswrapper[4720]: I0122 07:08:15.649111 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eeeaea3e-c420-47ad-862a-ecb4c26eecf4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.002549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ll64" event={"ID":"eeeaea3e-c420-47ad-862a-ecb4c26eecf4","Type":"ContainerDied","Data":"0fd0eb2c98ebec18a81b363409aa85d69b13d745fd50f091460471dff2e42fb5"}
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.002631 4720 scope.go:117] "RemoveContainer" containerID="bf5211d3180915d6ca93f84f017973471ea5de142bef65e7ea6e77dc8e6fe872"
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.002625 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7ll64"
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.030959 4720 scope.go:117] "RemoveContainer" containerID="b634a1971a025259fa90537eddd2553644bc738018e8abba74324f9132128759"
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.060671 4720 scope.go:117] "RemoveContainer" containerID="9b55e298fdcfdf455f9da9ae1afd7fc2b0e1c4d4fd3ba52ef8c125ee36af2997"
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.098168 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ll64"]
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.113429 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ll64"]
Jan 22 07:08:16 crc kubenswrapper[4720]: I0122 07:08:16.220656 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" path="/var/lib/kubelet/pods/eeeaea3e-c420-47ad-862a-ecb4c26eecf4/volumes"
Jan 22 07:08:17 crc kubenswrapper[4720]: I0122 07:08:17.543422 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:17 crc kubenswrapper[4720]: I0122 07:08:17.543735 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:17 crc kubenswrapper[4720]: I0122 07:08:17.596658 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:18 crc kubenswrapper[4720]: I0122 07:08:18.067879 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:19 crc kubenswrapper[4720]: I0122 07:08:19.962064 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:20 crc kubenswrapper[4720]: I0122 07:08:20.032506 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twdnp" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="registry-server" containerID="cri-o://eaecc020e4244f98626f1961efef84b8ae04369f01e58bea3b038e3c2db59a0a" gracePeriod=2
Jan 22 07:08:20 crc kubenswrapper[4720]: I0122 07:08:20.211460 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c"
Jan 22 07:08:20 crc kubenswrapper[4720]: E0122 07:08:20.211732 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.072551 4720 generic.go:334] "Generic (PLEG): container finished" podID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerID="eaecc020e4244f98626f1961efef84b8ae04369f01e58bea3b038e3c2db59a0a" exitCode=0
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.072900 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerDied","Data":"eaecc020e4244f98626f1961efef84b8ae04369f01e58bea3b038e3c2db59a0a"}
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.605310 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.715797 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lgp8\" (UniqueName: \"kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8\") pod \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") "
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.716191 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content\") pod \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") "
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.718187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities\") pod \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\" (UID: \"31b6e3a9-3b24-48e1-9dfd-623546e4b36c\") "
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.718876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities" (OuterVolumeSpecName: "utilities") pod "31b6e3a9-3b24-48e1-9dfd-623546e4b36c" (UID: "31b6e3a9-3b24-48e1-9dfd-623546e4b36c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.724643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8" (OuterVolumeSpecName: "kube-api-access-7lgp8") pod "31b6e3a9-3b24-48e1-9dfd-623546e4b36c" (UID: "31b6e3a9-3b24-48e1-9dfd-623546e4b36c"). InnerVolumeSpecName "kube-api-access-7lgp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.820302 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.820338 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lgp8\" (UniqueName: \"kubernetes.io/projected/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-kube-api-access-7lgp8\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.832303 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31b6e3a9-3b24-48e1-9dfd-623546e4b36c" (UID: "31b6e3a9-3b24-48e1-9dfd-623546e4b36c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:08:23 crc kubenswrapper[4720]: I0122 07:08:23.922441 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31b6e3a9-3b24-48e1-9dfd-623546e4b36c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.083487 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twdnp" event={"ID":"31b6e3a9-3b24-48e1-9dfd-623546e4b36c","Type":"ContainerDied","Data":"451b9406ab9edcc37b27333a6d62fa29bee514b015580298ab9171a11b43f80f"}
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.083558 4720 scope.go:117] "RemoveContainer" containerID="eaecc020e4244f98626f1961efef84b8ae04369f01e58bea3b038e3c2db59a0a"
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.083557 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twdnp"
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.110980 4720 scope.go:117] "RemoveContainer" containerID="bf8d4fd4408e33bcbfbfb6ec543074d20c53f70d663ad31446ae0fc4673e6c0f"
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.117941 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.126542 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twdnp"]
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.133304 4720 scope.go:117] "RemoveContainer" containerID="6bf4a78e666f9e1d30af8f8d6589a1a13458e1cea1fb160c3e0ce2b27c464fbe"
Jan 22 07:08:24 crc kubenswrapper[4720]: I0122 07:08:24.221837 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" path="/var/lib/kubelet/pods/31b6e3a9-3b24-48e1-9dfd-623546e4b36c/volumes"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.173303 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher5bae-account-delete-lr8dd"]
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174279 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="extract-utilities"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174299 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="extract-utilities"
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174316 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174325 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174338 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="extract-utilities"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174347 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="extract-utilities"
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174364 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="extract-content"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174370 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="extract-content"
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174383 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="extract-content"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174390 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="extract-content"
Jan 22 07:08:27 crc kubenswrapper[4720]: E0122 07:08:27.174406 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174420 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174614 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeeaea3e-c420-47ad-862a-ecb4c26eecf4" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.174632 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b6e3a9-3b24-48e1-9dfd-623546e4b36c" containerName="registry-server"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.175399 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.210273 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5bae-account-delete-lr8dd"]
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.275723 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.276771 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfpj\" (UniqueName: \"kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.378707 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdfpj\" (UniqueName: \"kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.378801 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.379609 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.409561 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdfpj\" (UniqueName: \"kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj\") pod \"watcher5bae-account-delete-lr8dd\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") " pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.493906 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:27 crc kubenswrapper[4720]: I0122 07:08:27.792469 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5bae-account-delete-lr8dd"]
Jan 22 07:08:28 crc kubenswrapper[4720]: I0122 07:08:28.122151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd" event={"ID":"fc1c0bbb-df33-4754-a5a3-17331735996e","Type":"ContainerStarted","Data":"ac3ea03e1ffb648d93b05d8b87ff7e5efcaa75069fdd8b3b46729fd5b7d08889"}
Jan 22 07:08:28 crc kubenswrapper[4720]: I0122 07:08:28.122724 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd" event={"ID":"fc1c0bbb-df33-4754-a5a3-17331735996e","Type":"ContainerStarted","Data":"45b27f614af8e3b2f2e04e21d98a1d4523224a3bc192f3dd1f5596d891eda6de"}
Jan 22 07:08:28 crc kubenswrapper[4720]: I0122 07:08:28.145084 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd" podStartSLOduration=1.145055658 podStartE2EDuration="1.145055658s" podCreationTimestamp="2026-01-22 07:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:28.143390891 +0000 UTC m=+2000.285297596" watchObservedRunningTime="2026-01-22 07:08:28.145055658 +0000 UTC m=+2000.286962383"
Jan 22 07:08:29 crc kubenswrapper[4720]: I0122 07:08:29.131147 4720 generic.go:334] "Generic (PLEG): container finished" podID="fc1c0bbb-df33-4754-a5a3-17331735996e" containerID="ac3ea03e1ffb648d93b05d8b87ff7e5efcaa75069fdd8b3b46729fd5b7d08889" exitCode=0
Jan 22 07:08:29 crc kubenswrapper[4720]: I0122 07:08:29.131187 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd" event={"ID":"fc1c0bbb-df33-4754-a5a3-17331735996e","Type":"ContainerDied","Data":"ac3ea03e1ffb648d93b05d8b87ff7e5efcaa75069fdd8b3b46729fd5b7d08889"}
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.480024 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.641312 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdfpj\" (UniqueName: \"kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj\") pod \"fc1c0bbb-df33-4754-a5a3-17331735996e\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") "
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.641577 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts\") pod \"fc1c0bbb-df33-4754-a5a3-17331735996e\" (UID: \"fc1c0bbb-df33-4754-a5a3-17331735996e\") "
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.642251 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc1c0bbb-df33-4754-a5a3-17331735996e" (UID: "fc1c0bbb-df33-4754-a5a3-17331735996e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.646841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj" (OuterVolumeSpecName: "kube-api-access-wdfpj") pod "fc1c0bbb-df33-4754-a5a3-17331735996e" (UID: "fc1c0bbb-df33-4754-a5a3-17331735996e"). InnerVolumeSpecName "kube-api-access-wdfpj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.743252 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc1c0bbb-df33-4754-a5a3-17331735996e-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:30 crc kubenswrapper[4720]: I0122 07:08:30.743287 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdfpj\" (UniqueName: \"kubernetes.io/projected/fc1c0bbb-df33-4754-a5a3-17331735996e-kube-api-access-wdfpj\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:31 crc kubenswrapper[4720]: I0122 07:08:31.152807 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd" event={"ID":"fc1c0bbb-df33-4754-a5a3-17331735996e","Type":"ContainerDied","Data":"45b27f614af8e3b2f2e04e21d98a1d4523224a3bc192f3dd1f5596d891eda6de"}
Jan 22 07:08:31 crc kubenswrapper[4720]: I0122 07:08:31.152849 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b27f614af8e3b2f2e04e21d98a1d4523224a3bc192f3dd1f5596d891eda6de"
Jan 22 07:08:31 crc kubenswrapper[4720]: I0122 07:08:31.152957 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5bae-account-delete-lr8dd"
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.311643 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dm7n6"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.327800 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-dm7n6"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.352281 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-5bae-account-create-update-m6l56"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.362180 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-5bae-account-create-update-m6l56"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.380631 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher5bae-account-delete-lr8dd"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.380699 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher5bae-account-delete-lr8dd"]
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.476747 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-jtjl6"]
Jan 22 07:08:32 crc kubenswrapper[4720]: E0122 07:08:32.477192 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1c0bbb-df33-4754-a5a3-17331735996e" containerName="mariadb-account-delete"
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.477211 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1c0bbb-df33-4754-a5a3-17331735996e" containerName="mariadb-account-delete"
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.477395 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1c0bbb-df33-4754-a5a3-17331735996e" containerName="mariadb-account-delete"
Jan 22 07:08:32 crc kubenswrapper[4720]: I0122
07:08:32.478045 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.486702 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jtjl6"] Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.518282 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.518331 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg2zh\" (UniqueName: \"kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.619163 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.620175 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg2zh\" (UniqueName: \"kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 
07:08:32.620119 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.643331 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg2zh\" (UniqueName: \"kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh\") pod \"watcher-db-create-jtjl6\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.687319 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss"] Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.688753 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.690548 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.695644 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss"] Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.796635 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.830565 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtbt7\" (UniqueName: \"kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.830667 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.985405 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.985805 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtbt7\" (UniqueName: \"kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:32 crc kubenswrapper[4720]: I0122 07:08:32.987465 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:33 crc kubenswrapper[4720]: I0122 07:08:33.034889 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtbt7\" (UniqueName: \"kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7\") pod \"watcher-3ef0-account-create-update-9s6ss\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:33 crc kubenswrapper[4720]: I0122 07:08:33.324548 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:33 crc kubenswrapper[4720]: I0122 07:08:33.350091 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jtjl6"] Jan 22 07:08:33 crc kubenswrapper[4720]: I0122 07:08:33.845798 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss"] Jan 22 07:08:33 crc kubenswrapper[4720]: W0122 07:08:33.870173 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0550363a_556c_4ab4_a361_f55f7f2afbad.slice/crio-3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39 WatchSource:0}: Error finding container 3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39: Status 404 returned error can't find the container with id 3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39 Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.221117 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bdec6b-7735-4197-b42f-378d6ec58b7a" 
path="/var/lib/kubelet/pods/c7bdec6b-7735-4197-b42f-378d6ec58b7a/volumes" Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.221876 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8efc26a-3612-4b7b-a772-769daea1bb6f" path="/var/lib/kubelet/pods/d8efc26a-3612-4b7b-a772-769daea1bb6f/volumes" Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.222404 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1c0bbb-df33-4754-a5a3-17331735996e" path="/var/lib/kubelet/pods/fc1c0bbb-df33-4754-a5a3-17331735996e/volumes" Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.252680 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" event={"ID":"0550363a-556c-4ab4-a361-f55f7f2afbad","Type":"ContainerStarted","Data":"5d0d3b8683c59fdf0268b786ffa5f5b29dc63bb46e658131ee301fdf9ea9ad73"} Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.252748 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" event={"ID":"0550363a-556c-4ab4-a361-f55f7f2afbad","Type":"ContainerStarted","Data":"3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39"} Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.254463 4720 generic.go:334] "Generic (PLEG): container finished" podID="1b2d9fec-7b33-48a9-a4d3-badf06756855" containerID="3cea65862f84662a715490058ad9282de891f32f7264ad46aa88d9dab42dbfe5" exitCode=0 Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.254526 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jtjl6" event={"ID":"1b2d9fec-7b33-48a9-a4d3-badf06756855","Type":"ContainerDied","Data":"3cea65862f84662a715490058ad9282de891f32f7264ad46aa88d9dab42dbfe5"} Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.254565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jtjl6" 
event={"ID":"1b2d9fec-7b33-48a9-a4d3-badf06756855","Type":"ContainerStarted","Data":"aff1eb6c21ab9591137c02614e1bf4016a0ad63279ee56026130b1dd207cd17d"} Jan 22 07:08:34 crc kubenswrapper[4720]: I0122 07:08:34.275021 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" podStartSLOduration=2.274999624 podStartE2EDuration="2.274999624s" podCreationTimestamp="2026-01-22 07:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:34.267867633 +0000 UTC m=+2006.409774338" watchObservedRunningTime="2026-01-22 07:08:34.274999624 +0000 UTC m=+2006.416906329" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.210985 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.263832 4720 generic.go:334] "Generic (PLEG): container finished" podID="0550363a-556c-4ab4-a361-f55f7f2afbad" containerID="5d0d3b8683c59fdf0268b786ffa5f5b29dc63bb46e658131ee301fdf9ea9ad73" exitCode=0 Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.263957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" event={"ID":"0550363a-556c-4ab4-a361-f55f7f2afbad","Type":"ContainerDied","Data":"5d0d3b8683c59fdf0268b786ffa5f5b29dc63bb46e658131ee301fdf9ea9ad73"} Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.732224 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.754760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg2zh\" (UniqueName: \"kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh\") pod \"1b2d9fec-7b33-48a9-a4d3-badf06756855\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.754855 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts\") pod \"1b2d9fec-7b33-48a9-a4d3-badf06756855\" (UID: \"1b2d9fec-7b33-48a9-a4d3-badf06756855\") " Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.755401 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b2d9fec-7b33-48a9-a4d3-badf06756855" (UID: "1b2d9fec-7b33-48a9-a4d3-badf06756855"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.761203 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh" (OuterVolumeSpecName: "kube-api-access-cg2zh") pod "1b2d9fec-7b33-48a9-a4d3-badf06756855" (UID: "1b2d9fec-7b33-48a9-a4d3-badf06756855"). InnerVolumeSpecName "kube-api-access-cg2zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.857284 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg2zh\" (UniqueName: \"kubernetes.io/projected/1b2d9fec-7b33-48a9-a4d3-badf06756855-kube-api-access-cg2zh\") on node \"crc\" DevicePath \"\"" Jan 22 07:08:35 crc kubenswrapper[4720]: I0122 07:08:35.857333 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b2d9fec-7b33-48a9-a4d3-badf06756855-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.275158 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-jtjl6" event={"ID":"1b2d9fec-7b33-48a9-a4d3-badf06756855","Type":"ContainerDied","Data":"aff1eb6c21ab9591137c02614e1bf4016a0ad63279ee56026130b1dd207cd17d"} Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.275234 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-jtjl6" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.275258 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aff1eb6c21ab9591137c02614e1bf4016a0ad63279ee56026130b1dd207cd17d" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.277512 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836"} Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.694978 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.773810 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtbt7\" (UniqueName: \"kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7\") pod \"0550363a-556c-4ab4-a361-f55f7f2afbad\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.774987 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts\") pod \"0550363a-556c-4ab4-a361-f55f7f2afbad\" (UID: \"0550363a-556c-4ab4-a361-f55f7f2afbad\") " Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.775893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0550363a-556c-4ab4-a361-f55f7f2afbad" (UID: "0550363a-556c-4ab4-a361-f55f7f2afbad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.780320 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7" (OuterVolumeSpecName: "kube-api-access-wtbt7") pod "0550363a-556c-4ab4-a361-f55f7f2afbad" (UID: "0550363a-556c-4ab4-a361-f55f7f2afbad"). InnerVolumeSpecName "kube-api-access-wtbt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.876993 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtbt7\" (UniqueName: \"kubernetes.io/projected/0550363a-556c-4ab4-a361-f55f7f2afbad-kube-api-access-wtbt7\") on node \"crc\" DevicePath \"\"" Jan 22 07:08:36 crc kubenswrapper[4720]: I0122 07:08:36.877035 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0550363a-556c-4ab4-a361-f55f7f2afbad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.287627 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" event={"ID":"0550363a-556c-4ab4-a361-f55f7f2afbad","Type":"ContainerDied","Data":"3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39"} Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.287688 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3187c116289a08acd35a62d946d526e88868b77cfb0c77561b13fe12eebcbd39" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.288754 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.891477 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"] Jan 22 07:08:37 crc kubenswrapper[4720]: E0122 07:08:37.892266 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2d9fec-7b33-48a9-a4d3-badf06756855" containerName="mariadb-database-create" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.892287 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2d9fec-7b33-48a9-a4d3-badf06756855" containerName="mariadb-database-create" Jan 22 07:08:37 crc kubenswrapper[4720]: E0122 07:08:37.892300 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0550363a-556c-4ab4-a361-f55f7f2afbad" containerName="mariadb-account-create-update" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.892308 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0550363a-556c-4ab4-a361-f55f7f2afbad" containerName="mariadb-account-create-update" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.892464 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2d9fec-7b33-48a9-a4d3-badf06756855" containerName="mariadb-database-create" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.892493 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0550363a-556c-4ab4-a361-f55f7f2afbad" containerName="mariadb-account-create-update" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.893157 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.897116 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nwssd" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.897425 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:08:37 crc kubenswrapper[4720]: I0122 07:08:37.909499 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"] Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.011160 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.011224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88jqb\" (UniqueName: \"kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.011303 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.011374 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.112853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.112922 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88jqb\" (UniqueName: \"kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.112991 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.113032 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: 
I0122 07:08:38.118440 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.119714 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.130752 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.132955 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88jqb\" (UniqueName: \"kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb\") pod \"watcher-kuttl-db-sync-bsgvf\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.210725 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"
Jan 22 07:08:38 crc kubenswrapper[4720]: I0122 07:08:38.710939 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"]
Jan 22 07:08:38 crc kubenswrapper[4720]: W0122 07:08:38.722187 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bd76e9c_a303_4608_ab91_268931894795.slice/crio-ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad WatchSource:0}: Error finding container ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad: Status 404 returned error can't find the container with id ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad
Jan 22 07:08:39 crc kubenswrapper[4720]: I0122 07:08:39.311279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" event={"ID":"5bd76e9c-a303-4608-ab91-268931894795","Type":"ContainerStarted","Data":"a2e87ee124f6e4831029ea5e0fb764572aa00f7812355fe16822db3f2ea7182b"}
Jan 22 07:08:39 crc kubenswrapper[4720]: I0122 07:08:39.311620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" event={"ID":"5bd76e9c-a303-4608-ab91-268931894795","Type":"ContainerStarted","Data":"ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad"}
Jan 22 07:08:39 crc kubenswrapper[4720]: I0122 07:08:39.338639 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" podStartSLOduration=2.338614556 podStartE2EDuration="2.338614556s" podCreationTimestamp="2026-01-22 07:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:39.330361503 +0000 UTC m=+2011.472268198" watchObservedRunningTime="2026-01-22 07:08:39.338614556 +0000 UTC m=+2011.480521261"
Jan 22 07:08:42 crc kubenswrapper[4720]: I0122 07:08:42.350243 4720 generic.go:334] "Generic (PLEG): container finished" podID="5bd76e9c-a303-4608-ab91-268931894795" containerID="a2e87ee124f6e4831029ea5e0fb764572aa00f7812355fe16822db3f2ea7182b" exitCode=0
Jan 22 07:08:42 crc kubenswrapper[4720]: I0122 07:08:42.350827 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" event={"ID":"5bd76e9c-a303-4608-ab91-268931894795","Type":"ContainerDied","Data":"a2e87ee124f6e4831029ea5e0fb764572aa00f7812355fe16822db3f2ea7182b"}
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.750672 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.864896 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data\") pod \"5bd76e9c-a303-4608-ab91-268931894795\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") "
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.865046 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88jqb\" (UniqueName: \"kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb\") pod \"5bd76e9c-a303-4608-ab91-268931894795\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") "
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.865120 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle\") pod \"5bd76e9c-a303-4608-ab91-268931894795\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") "
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.865165 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data\") pod \"5bd76e9c-a303-4608-ab91-268931894795\" (UID: \"5bd76e9c-a303-4608-ab91-268931894795\") "
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.870222 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb" (OuterVolumeSpecName: "kube-api-access-88jqb") pod "5bd76e9c-a303-4608-ab91-268931894795" (UID: "5bd76e9c-a303-4608-ab91-268931894795"). InnerVolumeSpecName "kube-api-access-88jqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.882897 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5bd76e9c-a303-4608-ab91-268931894795" (UID: "5bd76e9c-a303-4608-ab91-268931894795"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.896656 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bd76e9c-a303-4608-ab91-268931894795" (UID: "5bd76e9c-a303-4608-ab91-268931894795"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.923394 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data" (OuterVolumeSpecName: "config-data") pod "5bd76e9c-a303-4608-ab91-268931894795" (UID: "5bd76e9c-a303-4608-ab91-268931894795"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.967266 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.967304 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88jqb\" (UniqueName: \"kubernetes.io/projected/5bd76e9c-a303-4608-ab91-268931894795-kube-api-access-88jqb\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.967316 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:43 crc kubenswrapper[4720]: I0122 07:08:43.967325 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5bd76e9c-a303-4608-ab91-268931894795-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.367210 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf" event={"ID":"5bd76e9c-a303-4608-ab91-268931894795","Type":"ContainerDied","Data":"ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad"}
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.367563 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0b088fff79d90adad45a233ed8549d8e06a0d8e8862ace79aa1ed1261af3ad"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.367298 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.718265 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:08:44 crc kubenswrapper[4720]: E0122 07:08:44.718908 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bd76e9c-a303-4608-ab91-268931894795" containerName="watcher-kuttl-db-sync"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.718948 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bd76e9c-a303-4608-ab91-268931894795" containerName="watcher-kuttl-db-sync"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.719172 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bd76e9c-a303-4608-ab91-268931894795" containerName="watcher-kuttl-db-sync"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.720333 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.725764 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-nwssd"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.726819 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.731579 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.733256 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.780018 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782356 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782417 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782471 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782497 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782559 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2rw\" (UniqueName: \"kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.782682 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.792288 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934585 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934609 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934647 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934834 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.934947 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4cgb\" (UniqueName: \"kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935043 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935068 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935091 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935173 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2rw\" (UniqueName: \"kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.935371 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.942468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.948540 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.967658 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.965784 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2rw\" (UniqueName: \"kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:44 crc kubenswrapper[4720]: I0122 07:08:44.972510 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.000989 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.002306 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.010362 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037352 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4cgb\" (UniqueName: \"kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037489 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037517 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.037555 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.038290 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.043801 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.044685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.046724 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.050800 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.059507 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.059864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.073533 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4cgb\" (UniqueName: \"kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb\") pod \"watcher-kuttl-api-1\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.103686 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.105615 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.110108 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.136964 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.138947 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.138989 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.139106 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.139261 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.139296 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.139426 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvzt\" (UniqueName: \"kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241472 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241557 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241645 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tvzt\" (UniqueName: \"kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241687 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241706 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241784 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.241851 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n96kh\" (UniqueName: \"kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.242990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.255103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.256285 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.256950 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.259303 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.261364 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tvzt\" (UniqueName: \"kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.348549 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n96kh\" (UniqueName: \"kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.348700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.348754 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.348778 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.348876 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.350390 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.351050 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.353061 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.353775 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.354584 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.379897 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n96kh\" (UniqueName: \"kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh\") pod \"watcher-kuttl-applier-0\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.448559 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.467174 4720 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.591422 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:08:45 crc kubenswrapper[4720]: I0122 07:08:45.840179 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.022852 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.173228 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.407774 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"363c7177-2769-4f77-9d02-4631aa271f29","Type":"ContainerStarted","Data":"a368bd65488ea5b1366e1d580596a8b1889d2ec5556737eea21bd29cd271991c"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.412135 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerStarted","Data":"04426488c4a18c1b4b80d4ebeb17423bf69c8d4cfbbf2b702f147da4954b75b6"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.412207 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerStarted","Data":"e9cabe487c42666fce6015dfcd365601793ec73d5bc2bfa3cf1887b94576d971"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.415235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"d4418ed8-9908-46c6-9afa-9dc16f45aa57","Type":"ContainerStarted","Data":"bf35871b5b2599edc849908d43ae38fd07129e0208147078085a223e5792a5c6"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.417293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerStarted","Data":"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.417325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerStarted","Data":"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.417341 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerStarted","Data":"26fa0f9906a7410e4c633ede2e90ada4c526b1b5242fc74f1640b2a90d98c344"} Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.417839 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.418985 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.187:9322/\": dial tcp 10.217.0.187:9322: connect: connection refused" Jan 22 07:08:46 crc kubenswrapper[4720]: I0122 07:08:46.439252 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.439232715 podStartE2EDuration="2.439232715s" podCreationTimestamp="2026-01-22 07:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:46.435113399 +0000 UTC m=+2018.577020104" watchObservedRunningTime="2026-01-22 07:08:46.439232715 +0000 UTC m=+2018.581139420" Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.427051 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"363c7177-2769-4f77-9d02-4631aa271f29","Type":"ContainerStarted","Data":"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900"} Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.431099 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerStarted","Data":"e9bc3d11e15b14a0d9f73853f74fa018ad0b09d6aeb8cfeb54400f630e96909c"} Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.432627 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.434576 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4418ed8-9908-46c6-9afa-9dc16f45aa57","Type":"ContainerStarted","Data":"0dfa71ddf9b88f05f9d2f2851878e5f7d7886ab5d28529fc046332f951c03424"} Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.487194 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=3.48717408 podStartE2EDuration="3.48717408s" podCreationTimestamp="2026-01-22 07:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:47.460443326 +0000 UTC m=+2019.602350051" watchObservedRunningTime="2026-01-22 07:08:47.48717408 +0000 UTC m=+2019.629080785" Jan 22 07:08:47 crc 
kubenswrapper[4720]: I0122 07:08:47.489069 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=3.489062693 podStartE2EDuration="3.489062693s" podCreationTimestamp="2026-01-22 07:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:47.477068405 +0000 UTC m=+2019.618975110" watchObservedRunningTime="2026-01-22 07:08:47.489062693 +0000 UTC m=+2019.630969398" Jan 22 07:08:47 crc kubenswrapper[4720]: I0122 07:08:47.505402 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=3.505384764 podStartE2EDuration="3.505384764s" podCreationTimestamp="2026-01-22 07:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:08:47.50347187 +0000 UTC m=+2019.645378595" watchObservedRunningTime="2026-01-22 07:08:47.505384764 +0000 UTC m=+2019.647291479" Jan 22 07:08:50 crc kubenswrapper[4720]: I0122 07:08:50.019835 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:50 crc kubenswrapper[4720]: I0122 07:08:50.045553 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:50 crc kubenswrapper[4720]: I0122 07:08:50.187032 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:50 crc kubenswrapper[4720]: I0122 07:08:50.352744 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:50 crc kubenswrapper[4720]: I0122 07:08:50.468704 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.045305 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.049159 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.352480 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.363020 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.449312 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.468119 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.489647 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.499191 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.509333 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.522470 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 
07:08:55.525754 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.542052 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:08:55 crc kubenswrapper[4720]: I0122 07:08:55.571187 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.050257 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.050988 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-central-agent" containerID="cri-o://445589bd9c4c5e0b56a28f4a8918239cd9b2808a00257e492817e13644bc55d8" gracePeriod=30 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.051497 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="proxy-httpd" containerID="cri-o://8a4459b3cd7051bbf6872a5139ac333dc6dbc45b87d6a82eb5c952de0f250ec2" gracePeriod=30 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.051583 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="sg-core" containerID="cri-o://9bf9194a0332b49721aa03eee00c11e46b8bc0a1e64f0126b70f28f80d64ec22" gracePeriod=30 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.051602 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="141f045d-3987-4578-b7b5-bf65e745233e" 
containerName="ceilometer-notification-agent" containerID="cri-o://d0dedeaec41ba5ac0ff39c1bfca85633f3b94611cf604fcf6af1198c6a959eb8" gracePeriod=30 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.532992 4720 generic.go:334] "Generic (PLEG): container finished" podID="141f045d-3987-4578-b7b5-bf65e745233e" containerID="8a4459b3cd7051bbf6872a5139ac333dc6dbc45b87d6a82eb5c952de0f250ec2" exitCode=0 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.533036 4720 generic.go:334] "Generic (PLEG): container finished" podID="141f045d-3987-4578-b7b5-bf65e745233e" containerID="9bf9194a0332b49721aa03eee00c11e46b8bc0a1e64f0126b70f28f80d64ec22" exitCode=2 Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.533059 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerDied","Data":"8a4459b3cd7051bbf6872a5139ac333dc6dbc45b87d6a82eb5c952de0f250ec2"} Jan 22 07:08:58 crc kubenswrapper[4720]: I0122 07:08:58.533093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerDied","Data":"9bf9194a0332b49721aa03eee00c11e46b8bc0a1e64f0126b70f28f80d64ec22"} Jan 22 07:08:59 crc kubenswrapper[4720]: I0122 07:08:59.546544 4720 generic.go:334] "Generic (PLEG): container finished" podID="141f045d-3987-4578-b7b5-bf65e745233e" containerID="445589bd9c4c5e0b56a28f4a8918239cd9b2808a00257e492817e13644bc55d8" exitCode=0 Jan 22 07:08:59 crc kubenswrapper[4720]: I0122 07:08:59.546863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerDied","Data":"445589bd9c4c5e0b56a28f4a8918239cd9b2808a00257e492817e13644bc55d8"} Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.649324 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.651428 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.665000 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825413 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825515 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825561 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825580 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhzr\" (UniqueName: \"kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825597 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.825761 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.927891 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928031 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928088 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 
07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhhzr\" (UniqueName: \"kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928167 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.928657 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.934586 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.937334 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.938684 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.942549 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.944339 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhhzr\" (UniqueName: \"kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr\") pod \"watcher-kuttl-api-2\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:03 crc kubenswrapper[4720]: I0122 07:09:03.970179 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:04 crc kubenswrapper[4720]: I0122 07:09:04.374441 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 07:09:04 crc kubenswrapper[4720]: W0122 07:09:04.377749 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeaa181bf_c954_4b02_8173_6412083bdebe.slice/crio-e6daaa5840fdb240f2be4cf3dd07ad8db07122fad9afb61cf568a3fa873a8b85 WatchSource:0}: Error finding container e6daaa5840fdb240f2be4cf3dd07ad8db07122fad9afb61cf568a3fa873a8b85: Status 404 returned error can't find the container with id e6daaa5840fdb240f2be4cf3dd07ad8db07122fad9afb61cf568a3fa873a8b85 Jan 22 07:09:04 crc kubenswrapper[4720]: I0122 07:09:04.610222 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerStarted","Data":"2a922c4d33042b015ed4c10c9c7b4e97425c6c853f32284c8aae203e1ffc5472"} Jan 22 07:09:04 crc kubenswrapper[4720]: I0122 07:09:04.610356 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerStarted","Data":"e6daaa5840fdb240f2be4cf3dd07ad8db07122fad9afb61cf568a3fa873a8b85"} Jan 22 07:09:05 crc kubenswrapper[4720]: I0122 07:09:05.620259 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerStarted","Data":"93c055a185830312f248e57132956c85109559ef3cc0fce5aafc6a53ec40cbc9"} Jan 22 07:09:05 crc kubenswrapper[4720]: I0122 07:09:05.621653 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:05 crc kubenswrapper[4720]: I0122 07:09:05.648113 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=2.648088283 podStartE2EDuration="2.648088283s" podCreationTimestamp="2026-01-22 07:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:05.640359494 +0000 UTC m=+2037.782266219" watchObservedRunningTime="2026-01-22 07:09:05.648088283 +0000 UTC m=+2037.789994988" Jan 22 07:09:07 crc kubenswrapper[4720]: I0122 07:09:07.642285 4720 generic.go:334] "Generic (PLEG): container finished" podID="141f045d-3987-4578-b7b5-bf65e745233e" containerID="d0dedeaec41ba5ac0ff39c1bfca85633f3b94611cf604fcf6af1198c6a959eb8" exitCode=0 Jan 22 07:09:07 crc kubenswrapper[4720]: I0122 07:09:07.642605 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerDied","Data":"d0dedeaec41ba5ac0ff39c1bfca85633f3b94611cf604fcf6af1198c6a959eb8"} Jan 22 07:09:07 crc kubenswrapper[4720]: I0122 07:09:07.941018 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:07 crc kubenswrapper[4720]: I0122 07:09:07.953332 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126385 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126487 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7bz7\" (UniqueName: \"kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126613 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126698 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126792 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.126875 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data\") pod \"141f045d-3987-4578-b7b5-bf65e745233e\" (UID: \"141f045d-3987-4578-b7b5-bf65e745233e\") " Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.127398 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.127420 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.152176 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts" (OuterVolumeSpecName: "scripts") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.165196 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7" (OuterVolumeSpecName: "kube-api-access-w7bz7") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "kube-api-access-w7bz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.187441 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.229440 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.229479 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.229491 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.229507 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7bz7\" (UniqueName: \"kubernetes.io/projected/141f045d-3987-4578-b7b5-bf65e745233e-kube-api-access-w7bz7\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.229521 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/141f045d-3987-4578-b7b5-bf65e745233e-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.277060 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "ceilometer-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.300191 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.334399 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.334445 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.352045 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data" (OuterVolumeSpecName: "config-data") pod "141f045d-3987-4578-b7b5-bf65e745233e" (UID: "141f045d-3987-4578-b7b5-bf65e745233e"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.435836 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141f045d-3987-4578-b7b5-bf65e745233e-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.655736 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"141f045d-3987-4578-b7b5-bf65e745233e","Type":"ContainerDied","Data":"96835430102cc00ed51bb7e7bfcd5b2db244df711ddc846705ecb6f3cbb6a8a5"}
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.655809 4720 scope.go:117] "RemoveContainer" containerID="8a4459b3cd7051bbf6872a5139ac333dc6dbc45b87d6a82eb5c952de0f250ec2"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.655841 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.675961 4720 scope.go:117] "RemoveContainer" containerID="9bf9194a0332b49721aa03eee00c11e46b8bc0a1e64f0126b70f28f80d64ec22"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.692940 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.703414 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.723524 4720 scope.go:117] "RemoveContainer" containerID="d0dedeaec41ba5ac0ff39c1bfca85633f3b94611cf604fcf6af1198c6a959eb8"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.748963 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:08 crc kubenswrapper[4720]: E0122 07:09:08.749419 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141f045d-3987-4578-b7b5-bf65e745233e"
containerName="ceilometer-central-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749438 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-central-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: E0122 07:09:08.749448 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="proxy-httpd"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749455 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="proxy-httpd"
Jan 22 07:09:08 crc kubenswrapper[4720]: E0122 07:09:08.749476 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="sg-core"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749482 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="sg-core"
Jan 22 07:09:08 crc kubenswrapper[4720]: E0122 07:09:08.749497 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-notification-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749503 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-notification-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749668 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="proxy-httpd"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749870 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="sg-core"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749882 4720 memory_manager.go:354] "RemoveStaleState removing state"
podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-notification-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.749895 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="141f045d-3987-4578-b7b5-bf65e745233e" containerName="ceilometer-central-agent"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.755378 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.755512 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.758876 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.759374 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.759609 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.762146 4720 scope.go:117] "RemoveContainer" containerID="445589bd9c4c5e0b56a28f4a8918239cd9b2808a00257e492817e13644bc55d8"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.945448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.945699 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd\")
pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.945821 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.945981 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbpq\" (UniqueName: \"kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.946085 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.946170 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.946258 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID:
\"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.946314 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:08 crc kubenswrapper[4720]: I0122 07:09:08.971573 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.047893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.047975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048016 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048096 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cbpq\" (UniqueName: \"kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq\")
pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048120 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048164 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048198 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048248 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.048719
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.054039 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.054230 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.054620 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.057649 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.059614 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts\") pod \"ceilometer-0\" (UID:
\"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.067690 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cbpq\" (UniqueName: \"kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq\") pod \"ceilometer-0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.110069 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.603955 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.614765 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 07:09:09 crc kubenswrapper[4720]: I0122 07:09:09.664006 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerStarted","Data":"8bfc3c47c8346a733ce2d410b0341c3abcb330427b70e5f0bffe3f2f69643134"}
Jan 22 07:09:10 crc kubenswrapper[4720]: I0122 07:09:10.228603 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141f045d-3987-4578-b7b5-bf65e745233e" path="/var/lib/kubelet/pods/141f045d-3987-4578-b7b5-bf65e745233e/volumes"
Jan 22 07:09:10 crc kubenswrapper[4720]: I0122 07:09:10.675129 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerStarted","Data":"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e"}
Jan 22 07:09:11 crc kubenswrapper[4720]: I0122 07:09:11.684674 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0"
event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerStarted","Data":"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7"}
Jan 22 07:09:12 crc kubenswrapper[4720]: I0122 07:09:12.694414 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerStarted","Data":"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8"}
Jan 22 07:09:13 crc kubenswrapper[4720]: I0122 07:09:13.706070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerStarted","Data":"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a"}
Jan 22 07:09:13 crc kubenswrapper[4720]: I0122 07:09:13.706339 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:13 crc kubenswrapper[4720]: I0122 07:09:13.799646 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.509254325 podStartE2EDuration="5.799616881s" podCreationTimestamp="2026-01-22 07:09:08 +0000 UTC" firstStartedPulling="2026-01-22 07:09:09.614410255 +0000 UTC m=+2041.756316960" lastFinishedPulling="2026-01-22 07:09:12.904772811 +0000 UTC m=+2045.046679516" observedRunningTime="2026-01-22 07:09:13.756884114 +0000 UTC m=+2045.898790829" watchObservedRunningTime="2026-01-22 07:09:13.799616881 +0000 UTC m=+2045.941523586"
Jan 22 07:09:13 crc kubenswrapper[4720]: I0122 07:09:13.971659 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 22 07:09:13 crc kubenswrapper[4720]: I0122 07:09:13.978557 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 22 07:09:14 crc kubenswrapper[4720]: I0122
07:09:14.721053 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2"
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.331880 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"]
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.361284 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.361588 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-kuttl-api-log" containerID="cri-o://04426488c4a18c1b4b80d4ebeb17423bf69c8d4cfbbf2b702f147da4954b75b6" gracePeriod=30
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.362146 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-api" containerID="cri-o://e9bc3d11e15b14a0d9f73853f74fa018ad0b09d6aeb8cfeb54400f630e96909c" gracePeriod=30
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.392863 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.188:9322/\": EOF"
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.725262 4720 generic.go:334] "Generic (PLEG): container finished" podID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerID="04426488c4a18c1b4b80d4ebeb17423bf69c8d4cfbbf2b702f147da4954b75b6" exitCode=143
Jan 22 07:09:15 crc kubenswrapper[4720]: I0122 07:09:15.725364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1"
event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerDied","Data":"04426488c4a18c1b4b80d4ebeb17423bf69c8d4cfbbf2b702f147da4954b75b6"}
Jan 22 07:09:16 crc kubenswrapper[4720]: I0122 07:09:16.751221 4720 generic.go:334] "Generic (PLEG): container finished" podID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerID="e9bc3d11e15b14a0d9f73853f74fa018ad0b09d6aeb8cfeb54400f630e96909c" exitCode=0
Jan 22 07:09:16 crc kubenswrapper[4720]: I0122 07:09:16.751671 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-kuttl-api-log" containerID="cri-o://2a922c4d33042b015ed4c10c9c7b4e97425c6c853f32284c8aae203e1ffc5472" gracePeriod=30
Jan 22 07:09:16 crc kubenswrapper[4720]: I0122 07:09:16.751989 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerDied","Data":"e9bc3d11e15b14a0d9f73853f74fa018ad0b09d6aeb8cfeb54400f630e96909c"}
Jan 22 07:09:16 crc kubenswrapper[4720]: I0122 07:09:16.752371 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-api" containerID="cri-o://93c055a185830312f248e57132956c85109559ef3cc0fce5aafc6a53ec40cbc9" gracePeriod=30
Jan 22 07:09:16 crc kubenswrapper[4720]: I0122 07:09:16.947397 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011460 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011577 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011677 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011748 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.011800 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"kube-api-access-j4cgb\" (UniqueName: \"kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb\") pod \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\" (UID: \"04f95580-6f16-4d5e-8a74-d2f3dcce4109\") "
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.015485 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs" (OuterVolumeSpecName: "logs") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.019409 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb" (OuterVolumeSpecName: "kube-api-access-j4cgb") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "kube-api-access-j4cgb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.045150 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.066258 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.085096 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data" (OuterVolumeSpecName: "config-data") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.094446 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "04f95580-6f16-4d5e-8a74-d2f3dcce4109" (UID: "04f95580-6f16-4d5e-8a74-d2f3dcce4109"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117213 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117261 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117273 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117286 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04f95580-6f16-4d5e-8a74-d2f3dcce4109-config-data\") on node \"crc\"
DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117296 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04f95580-6f16-4d5e-8a74-d2f3dcce4109-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.117305 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4cgb\" (UniqueName: \"kubernetes.io/projected/04f95580-6f16-4d5e-8a74-d2f3dcce4109-kube-api-access-j4cgb\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.763713 4720 generic.go:334] "Generic (PLEG): container finished" podID="eaa181bf-c954-4b02-8173-6412083bdebe" containerID="93c055a185830312f248e57132956c85109559ef3cc0fce5aafc6a53ec40cbc9" exitCode=0
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.763749 4720 generic.go:334] "Generic (PLEG): container finished" podID="eaa181bf-c954-4b02-8173-6412083bdebe" containerID="2a922c4d33042b015ed4c10c9c7b4e97425c6c853f32284c8aae203e1ffc5472" exitCode=143
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.763797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerDied","Data":"93c055a185830312f248e57132956c85109559ef3cc0fce5aafc6a53ec40cbc9"}
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.763850 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerDied","Data":"2a922c4d33042b015ed4c10c9c7b4e97425c6c853f32284c8aae203e1ffc5472"}
Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.765467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1"
event={"ID":"04f95580-6f16-4d5e-8a74-d2f3dcce4109","Type":"ContainerDied","Data":"e9cabe487c42666fce6015dfcd365601793ec73d5bc2bfa3cf1887b94576d971"} Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.765523 4720 scope.go:117] "RemoveContainer" containerID="e9bc3d11e15b14a0d9f73853f74fa018ad0b09d6aeb8cfeb54400f630e96909c" Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.765589 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.787143 4720 scope.go:117] "RemoveContainer" containerID="04426488c4a18c1b4b80d4ebeb17423bf69c8d4cfbbf2b702f147da4954b75b6" Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.817147 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 07:09:17 crc kubenswrapper[4720]: I0122 07:09:17.824294 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.055820 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.133786 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.134542 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.134619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.134680 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.134721 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhhzr\" (UniqueName: \"kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.134740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs\") pod \"eaa181bf-c954-4b02-8173-6412083bdebe\" (UID: \"eaa181bf-c954-4b02-8173-6412083bdebe\") " Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.135673 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs" (OuterVolumeSpecName: "logs") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.140559 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr" (OuterVolumeSpecName: "kube-api-access-lhhzr") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "kube-api-access-lhhzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.164042 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.178249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.203234 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data" (OuterVolumeSpecName: "config-data") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.222623 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "eaa181bf-c954-4b02-8173-6412083bdebe" (UID: "eaa181bf-c954-4b02-8173-6412083bdebe"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.223616 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" path="/var/lib/kubelet/pods/04f95580-6f16-4d5e-8a74-d2f3dcce4109/volumes" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237127 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237171 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237185 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 
07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237197 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaa181bf-c954-4b02-8173-6412083bdebe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237210 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhhzr\" (UniqueName: \"kubernetes.io/projected/eaa181bf-c954-4b02-8173-6412083bdebe-kube-api-access-lhhzr\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.237223 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eaa181bf-c954-4b02-8173-6412083bdebe-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.779774 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.779791 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"eaa181bf-c954-4b02-8173-6412083bdebe","Type":"ContainerDied","Data":"e6daaa5840fdb240f2be4cf3dd07ad8db07122fad9afb61cf568a3fa873a8b85"} Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.780008 4720 scope.go:117] "RemoveContainer" containerID="93c055a185830312f248e57132956c85109559ef3cc0fce5aafc6a53ec40cbc9" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.805458 4720 scope.go:117] "RemoveContainer" containerID="2a922c4d33042b015ed4c10c9c7b4e97425c6c853f32284c8aae203e1ffc5472" Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.830831 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 22 07:09:18 crc kubenswrapper[4720]: I0122 07:09:18.840302 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 
22 07:09:19 crc kubenswrapper[4720]: I0122 07:09:19.623002 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:19 crc kubenswrapper[4720]: I0122 07:09:19.623869 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-kuttl-api-log" containerID="cri-o://ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d" gracePeriod=30 Jan 22 07:09:19 crc kubenswrapper[4720]: I0122 07:09:19.623933 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" containerID="cri-o://e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb" gracePeriod=30 Jan 22 07:09:19 crc kubenswrapper[4720]: I0122 07:09:19.793677 4720 generic.go:334] "Generic (PLEG): container finished" podID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerID="ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d" exitCode=143 Jan 22 07:09:19 crc kubenswrapper[4720]: I0122 07:09:19.793727 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerDied","Data":"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d"} Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.113402 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.187:9322/\": read tcp 10.217.0.2:42710->10.217.0.187:9322: read: connection reset by peer" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.113402 4720 prober.go:107] "Probe failed" probeType="Readiness" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.187:9322/\": read tcp 10.217.0.2:42712->10.217.0.187:9322: read: connection reset by peer" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.221670 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" path="/var/lib/kubelet/pods/eaa181bf-c954-4b02-8173-6412083bdebe/volumes" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.508781 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.578760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.579168 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb2rw\" (UniqueName: \"kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.579220 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.579348 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.579480 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.579521 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls\") pod \"4435d9ae-544f-4758-a6fc-15f2827c9adb\" (UID: \"4435d9ae-544f-4758-a6fc-15f2827c9adb\") " Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.580125 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs" (OuterVolumeSpecName: "logs") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.599495 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw" (OuterVolumeSpecName: "kube-api-access-rb2rw") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "kube-api-access-rb2rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.614383 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.657131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.662117 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data" (OuterVolumeSpecName: "config-data") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.680007 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4435d9ae-544f-4758-a6fc-15f2827c9adb" (UID: "4435d9ae-544f-4758-a6fc-15f2827c9adb"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681328 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681353 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681366 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681378 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb2rw\" (UniqueName: \"kubernetes.io/projected/4435d9ae-544f-4758-a6fc-15f2827c9adb-kube-api-access-rb2rw\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681390 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4435d9ae-544f-4758-a6fc-15f2827c9adb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.681402 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4435d9ae-544f-4758-a6fc-15f2827c9adb-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.803880 4720 generic.go:334] "Generic (PLEG): container finished" podID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerID="e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb" exitCode=0 Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.803937 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerDied","Data":"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb"} Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.803969 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.803989 4720 scope.go:117] "RemoveContainer" containerID="e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.803974 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4435d9ae-544f-4758-a6fc-15f2827c9adb","Type":"ContainerDied","Data":"26fa0f9906a7410e4c633ede2e90ada4c526b1b5242fc74f1640b2a90d98c344"} Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.853601 4720 scope.go:117] "RemoveContainer" containerID="ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.859260 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.872607 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.884946 4720 scope.go:117] "RemoveContainer" containerID="e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.885568 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb\": container with ID starting with e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb not found: ID 
does not exist" containerID="e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.885611 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb"} err="failed to get container status \"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb\": rpc error: code = NotFound desc = could not find container \"e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb\": container with ID starting with e4deac1227cde3e3308a5017a2426f0c9a87a2375f4ea7ffb8aecfeb775f21cb not found: ID does not exist" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.885640 4720 scope.go:117] "RemoveContainer" containerID="ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.886006 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d\": container with ID starting with ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d not found: ID does not exist" containerID="ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.886076 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d"} err="failed to get container status \"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d\": rpc error: code = NotFound desc = could not find container \"ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d\": container with ID starting with ccb903735f90e1bf5564d2b29631fec0ca193cc48b489bd9866d7f2db102d68d not found: ID does not exist" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.911464 4720 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"] Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.926033 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bsgvf"] Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.983952 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher3ef0-account-delete-prxcd"] Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984405 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984423 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984436 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984443 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984457 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984462 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984476 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984482 4720 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984493 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984499 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: E0122 07:09:20.984514 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984521 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984673 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984687 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984701 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984712 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984721 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa181bf-c954-4b02-8173-6412083bdebe" containerName="watcher-api" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.984732 4720 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="04f95580-6f16-4d5e-8a74-d2f3dcce4109" containerName="watcher-kuttl-api-log" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.985497 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:20 crc kubenswrapper[4720]: I0122 07:09:20.993900 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher3ef0-account-delete-prxcd"] Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.025739 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.026444 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" containerName="watcher-applier" containerID="cri-o://0dfa71ddf9b88f05f9d2f2851878e5f7d7886ab5d28529fc046332f951c03424" gracePeriod=30 Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.084444 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.084717 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="363c7177-2769-4f77-9d02-4631aa271f29" containerName="watcher-decision-engine" containerID="cri-o://085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900" gracePeriod=30 Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.089180 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " 
pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.089268 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvbqw\" (UniqueName: \"kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.191492 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.191845 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvbqw\" (UniqueName: \"kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.192534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.217274 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvbqw\" (UniqueName: 
\"kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw\") pod \"watcher3ef0-account-delete-prxcd\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.305812 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:21 crc kubenswrapper[4720]: I0122 07:09:21.830136 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher3ef0-account-delete-prxcd"] Jan 22 07:09:21 crc kubenswrapper[4720]: W0122 07:09:21.832801 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c7e9564_7ac1_4fe5_8d86_f73015634f2d.slice/crio-9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d WatchSource:0}: Error finding container 9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d: Status 404 returned error can't find the container with id 9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d Jan 22 07:09:22 crc kubenswrapper[4720]: I0122 07:09:22.221331 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4435d9ae-544f-4758-a6fc-15f2827c9adb" path="/var/lib/kubelet/pods/4435d9ae-544f-4758-a6fc-15f2827c9adb/volumes" Jan 22 07:09:22 crc kubenswrapper[4720]: I0122 07:09:22.222369 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bd76e9c-a303-4608-ab91-268931894795" path="/var/lib/kubelet/pods/5bd76e9c-a303-4608-ab91-268931894795/volumes" Jan 22 07:09:22 crc kubenswrapper[4720]: I0122 07:09:22.828846 4720 generic.go:334] "Generic (PLEG): container finished" podID="8c7e9564-7ac1-4fe5-8d86-f73015634f2d" containerID="c1d3677247e926de6bdf9b25d658331f06213d5da76ebf21d4fa186dfdde6499" exitCode=0 Jan 22 07:09:22 crc kubenswrapper[4720]: I0122 07:09:22.828959 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" event={"ID":"8c7e9564-7ac1-4fe5-8d86-f73015634f2d","Type":"ContainerDied","Data":"c1d3677247e926de6bdf9b25d658331f06213d5da76ebf21d4fa186dfdde6499"} Jan 22 07:09:22 crc kubenswrapper[4720]: I0122 07:09:22.829325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" event={"ID":"8c7e9564-7ac1-4fe5-8d86-f73015634f2d","Type":"ContainerStarted","Data":"9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d"} Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.595248 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.596317 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="proxy-httpd" containerID="cri-o://ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a" gracePeriod=30 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.596356 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="sg-core" containerID="cri-o://88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8" gracePeriod=30 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.596419 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-notification-agent" containerID="cri-o://38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7" gracePeriod=30 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.596290 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-central-agent" containerID="cri-o://f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e" gracePeriod=30 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.699389 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.192:3000/\": read tcp 10.217.0.2:46808->10.217.0.192:3000: read: connection reset by peer" Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.839858 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerID="ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a" exitCode=0 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.839903 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerID="88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8" exitCode=2 Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.839950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerDied","Data":"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a"} Jan 22 07:09:23 crc kubenswrapper[4720]: I0122 07:09:23.840062 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerDied","Data":"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8"} Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.191497 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.247258 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts\") pod \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.247361 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvbqw\" (UniqueName: \"kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw\") pod \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\" (UID: \"8c7e9564-7ac1-4fe5-8d86-f73015634f2d\") " Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.248128 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c7e9564-7ac1-4fe5-8d86-f73015634f2d" (UID: "8c7e9564-7ac1-4fe5-8d86-f73015634f2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.248380 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.254047 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw" (OuterVolumeSpecName: "kube-api-access-cvbqw") pod "8c7e9564-7ac1-4fe5-8d86-f73015634f2d" (UID: "8c7e9564-7ac1-4fe5-8d86-f73015634f2d"). InnerVolumeSpecName "kube-api-access-cvbqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.352246 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvbqw\" (UniqueName: \"kubernetes.io/projected/8c7e9564-7ac1-4fe5-8d86-f73015634f2d-kube-api-access-cvbqw\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.853394 4720 generic.go:334] "Generic (PLEG): container finished" podID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" containerID="0dfa71ddf9b88f05f9d2f2851878e5f7d7886ab5d28529fc046332f951c03424" exitCode=0 Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.853503 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4418ed8-9908-46c6-9afa-9dc16f45aa57","Type":"ContainerDied","Data":"0dfa71ddf9b88f05f9d2f2851878e5f7d7886ab5d28529fc046332f951c03424"} Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.859722 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerID="f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e" exitCode=0 Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.859921 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerDied","Data":"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e"} Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.862505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" event={"ID":"8c7e9564-7ac1-4fe5-8d86-f73015634f2d","Type":"ContainerDied","Data":"9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d"} Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.862553 4720 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9191cd7b2ef3352262950b2aec44dd6fbde59e7d44e8c61df97d5c55b555bf5d" Jan 22 07:09:24 crc kubenswrapper[4720]: I0122 07:09:24.862628 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher3ef0-account-delete-prxcd" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.182958 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.267690 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n96kh\" (UniqueName: \"kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh\") pod \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.267822 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle\") pod \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.267872 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls\") pod \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.267928 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs\") pod \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.267981 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data\") pod \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\" (UID: \"d4418ed8-9908-46c6-9afa-9dc16f45aa57\") " Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.269022 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs" (OuterVolumeSpecName: "logs") pod "d4418ed8-9908-46c6-9afa-9dc16f45aa57" (UID: "d4418ed8-9908-46c6-9afa-9dc16f45aa57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.310502 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh" (OuterVolumeSpecName: "kube-api-access-n96kh") pod "d4418ed8-9908-46c6-9afa-9dc16f45aa57" (UID: "d4418ed8-9908-46c6-9afa-9dc16f45aa57"). InnerVolumeSpecName "kube-api-access-n96kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.318095 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4418ed8-9908-46c6-9afa-9dc16f45aa57" (UID: "d4418ed8-9908-46c6-9afa-9dc16f45aa57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.388181 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.388227 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4418ed8-9908-46c6-9afa-9dc16f45aa57-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.388242 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n96kh\" (UniqueName: \"kubernetes.io/projected/d4418ed8-9908-46c6-9afa-9dc16f45aa57-kube-api-access-n96kh\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.393029 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data" (OuterVolumeSpecName: "config-data") pod "d4418ed8-9908-46c6-9afa-9dc16f45aa57" (UID: "d4418ed8-9908-46c6-9afa-9dc16f45aa57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.437392 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "d4418ed8-9908-46c6-9afa-9dc16f45aa57" (UID: "d4418ed8-9908-46c6-9afa-9dc16f45aa57"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.489882 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.489953 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4418ed8-9908-46c6-9afa-9dc16f45aa57-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.871947 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"d4418ed8-9908-46c6-9afa-9dc16f45aa57","Type":"ContainerDied","Data":"bf35871b5b2599edc849908d43ae38fd07129e0208147078085a223e5792a5c6"} Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.872008 4720 scope.go:117] "RemoveContainer" containerID="0dfa71ddf9b88f05f9d2f2851878e5f7d7886ab5d28529fc046332f951c03424" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.872016 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.915312 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:25 crc kubenswrapper[4720]: I0122 07:09:25.923327 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.014726 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jtjl6"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.023497 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-jtjl6"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.037278 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.044925 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher3ef0-account-delete-prxcd"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.052626 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher3ef0-account-delete-prxcd"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.059419 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-3ef0-account-create-update-9s6ss"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.224016 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0550363a-556c-4ab4-a361-f55f7f2afbad" path="/var/lib/kubelet/pods/0550363a-556c-4ab4-a361-f55f7f2afbad/volumes" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.225790 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2d9fec-7b33-48a9-a4d3-badf06756855" path="/var/lib/kubelet/pods/1b2d9fec-7b33-48a9-a4d3-badf06756855/volumes" 
Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.226595 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c7e9564-7ac1-4fe5-8d86-f73015634f2d" path="/var/lib/kubelet/pods/8c7e9564-7ac1-4fe5-8d86-f73015634f2d/volumes" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.227876 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" path="/var/lib/kubelet/pods/d4418ed8-9908-46c6-9afa-9dc16f45aa57/volumes" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.677468 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730056 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730452 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730522 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730628 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tvzt\" (UniqueName: 
\"kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730689 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.730713 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls\") pod \"363c7177-2769-4f77-9d02-4631aa271f29\" (UID: \"363c7177-2769-4f77-9d02-4631aa271f29\") " Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.735565 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs" (OuterVolumeSpecName: "logs") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.735576 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt" (OuterVolumeSpecName: "kube-api-access-9tvzt") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "kube-api-access-9tvzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.756628 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.769333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.777799 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data" (OuterVolumeSpecName: "config-data") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.816100 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "363c7177-2769-4f77-9d02-4631aa271f29" (UID: "363c7177-2769-4f77-9d02-4631aa271f29"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.832931 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tvzt\" (UniqueName: \"kubernetes.io/projected/363c7177-2769-4f77-9d02-4631aa271f29-kube-api-access-9tvzt\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.832969 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.832978 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.832988 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.832997 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/363c7177-2769-4f77-9d02-4631aa271f29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.833007 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/363c7177-2769-4f77-9d02-4631aa271f29-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.882180 4720 generic.go:334] "Generic (PLEG): container finished" podID="363c7177-2769-4f77-9d02-4631aa271f29" containerID="085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900" exitCode=0 Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.882275 4720 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.882296 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"363c7177-2769-4f77-9d02-4631aa271f29","Type":"ContainerDied","Data":"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900"} Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.883363 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"363c7177-2769-4f77-9d02-4631aa271f29","Type":"ContainerDied","Data":"a368bd65488ea5b1366e1d580596a8b1889d2ec5556737eea21bd29cd271991c"} Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.883415 4720 scope.go:117] "RemoveContainer" containerID="085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.905423 4720 scope.go:117] "RemoveContainer" containerID="085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900" Jan 22 07:09:26 crc kubenswrapper[4720]: E0122 07:09:26.905893 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900\": container with ID starting with 085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900 not found: ID does not exist" containerID="085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.906031 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900"} err="failed to get container status \"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900\": rpc error: code = NotFound desc = could not find 
container \"085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900\": container with ID starting with 085b52ceafd7b8e22df1742d3b17e58956e20512271b0bd9379eba22f4521900 not found: ID does not exist" Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.919738 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:26 crc kubenswrapper[4720]: I0122 07:09:26.929104 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.704657 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-7bkbr"] Jan 22 07:09:27 crc kubenswrapper[4720]: E0122 07:09:27.705128 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7e9564-7ac1-4fe5-8d86-f73015634f2d" containerName="mariadb-account-delete" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.705150 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7e9564-7ac1-4fe5-8d86-f73015634f2d" containerName="mariadb-account-delete" Jan 22 07:09:27 crc kubenswrapper[4720]: E0122 07:09:27.705162 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" containerName="watcher-applier" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.705173 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" containerName="watcher-applier" Jan 22 07:09:27 crc kubenswrapper[4720]: E0122 07:09:27.705209 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="363c7177-2769-4f77-9d02-4631aa271f29" containerName="watcher-decision-engine" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.705219 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="363c7177-2769-4f77-9d02-4631aa271f29" containerName="watcher-decision-engine" Jan 22 07:09:27 crc kubenswrapper[4720]: 
I0122 07:09:27.705441 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4418ed8-9908-46c6-9afa-9dc16f45aa57" containerName="watcher-applier" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.705464 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7e9564-7ac1-4fe5-8d86-f73015634f2d" containerName="mariadb-account-delete" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.705484 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="363c7177-2769-4f77-9d02-4631aa271f29" containerName="watcher-decision-engine" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.706284 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.717678 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7bkbr"] Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.730667 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p"] Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.732060 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.734633 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.751582 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s767\" (UniqueName: \"kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767\") pod \"watcher-db-create-7bkbr\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.751643 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts\") pod \"watcher-db-create-7bkbr\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.759398 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p"] Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.853752 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s767\" (UniqueName: \"kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767\") pod \"watcher-db-create-7bkbr\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.854130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts\") pod \"watcher-db-create-7bkbr\" (UID: 
\"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.854163 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.854389 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz6gw\" (UniqueName: \"kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.854860 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts\") pod \"watcher-db-create-7bkbr\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.877159 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s767\" (UniqueName: \"kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767\") pod \"watcher-db-create-7bkbr\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.958881 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz6gw\" (UniqueName: 
\"kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.959025 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:27 crc kubenswrapper[4720]: I0122 07:09:27.960037 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.022931 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz6gw\" (UniqueName: \"kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw\") pod \"watcher-5d39-account-create-update-7cf7p\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.026472 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.070026 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.228014 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="363c7177-2769-4f77-9d02-4631aa271f29" path="/var/lib/kubelet/pods/363c7177-2769-4f77-9d02-4631aa271f29/volumes" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.297957 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372262 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372316 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372576 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372650 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 
07:09:28.372802 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372840 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372856 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.372875 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cbpq\" (UniqueName: \"kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq\") pod \"1bfd898a-5438-4c41-b043-989ea7ef24d0\" (UID: \"1bfd898a-5438-4c41-b043-989ea7ef24d0\") " Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.374214 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.374463 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.379352 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts" (OuterVolumeSpecName: "scripts") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.379847 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq" (OuterVolumeSpecName: "kube-api-access-9cbpq") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "kube-api-access-9cbpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.404099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.445091 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.447603 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.465728 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data" (OuterVolumeSpecName: "config-data") pod "1bfd898a-5438-4c41-b043-989ea7ef24d0" (UID: "1bfd898a-5438-4c41-b043-989ea7ef24d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474924 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474955 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474969 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cbpq\" (UniqueName: \"kubernetes.io/projected/1bfd898a-5438-4c41-b043-989ea7ef24d0-kube-api-access-9cbpq\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474979 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474987 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.474997 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bfd898a-5438-4c41-b043-989ea7ef24d0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.475004 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.475014 4720 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bfd898a-5438-4c41-b043-989ea7ef24d0-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.584643 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7bkbr"] Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.697943 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p"] Jan 22 07:09:28 crc kubenswrapper[4720]: W0122 07:09:28.703063 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5324884d_c405_4664_b229_59325b6fff1b.slice/crio-d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409 WatchSource:0}: Error finding container d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409: Status 404 returned error can't find the container with id d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409 Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.906120 4720 generic.go:334] "Generic (PLEG): container finished" podID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerID="38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7" exitCode=0 Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.906242 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.906226 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerDied","Data":"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.906340 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"1bfd898a-5438-4c41-b043-989ea7ef24d0","Type":"ContainerDied","Data":"8bfc3c47c8346a733ce2d410b0341c3abcb330427b70e5f0bffe3f2f69643134"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.906384 4720 scope.go:117] "RemoveContainer" containerID="ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.916421 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7bkbr" event={"ID":"e26c83ad-445d-4fb7-92f9-a830d1fd4e41","Type":"ContainerStarted","Data":"b0afe50b82a4ed4fa15c71d75b0addd355b0e11e803757d7a72ecdc39fd38963"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.916494 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7bkbr" event={"ID":"e26c83ad-445d-4fb7-92f9-a830d1fd4e41","Type":"ContainerStarted","Data":"4e5ceb2c1a1273179092d686890a80815a58d0933d1c944f33c7f2a743793c4f"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.936467 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" event={"ID":"5324884d-c405-4664-b229-59325b6fff1b","Type":"ContainerStarted","Data":"07fd63215362add398cc17fe1a62a2747a73c18494f3a671e91152137886d9b4"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.936540 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" event={"ID":"5324884d-c405-4664-b229-59325b6fff1b","Type":"ContainerStarted","Data":"d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409"} Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.946219 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-db-create-7bkbr" podStartSLOduration=1.946195897 podStartE2EDuration="1.946195897s" podCreationTimestamp="2026-01-22 07:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:28.94418496 +0000 UTC m=+2061.086091675" watchObservedRunningTime="2026-01-22 07:09:28.946195897 +0000 UTC m=+2061.088102602" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.962481 4720 scope.go:117] "RemoveContainer" containerID="88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8" Jan 22 07:09:28 crc kubenswrapper[4720]: I0122 07:09:28.980649 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" podStartSLOduration=1.980623239 podStartE2EDuration="1.980623239s" podCreationTimestamp="2026-01-22 07:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:28.976597736 +0000 UTC m=+2061.118504441" watchObservedRunningTime="2026-01-22 07:09:28.980623239 +0000 UTC m=+2061.122529944" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.040133 4720 scope.go:117] "RemoveContainer" containerID="38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.047902 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.060306 4720 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.083961 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.084351 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="proxy-httpd" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084368 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="proxy-httpd" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.084389 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-notification-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084396 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-notification-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.084408 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="sg-core" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084414 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="sg-core" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.084432 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-central-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084438 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-central-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084586 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-notification-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084602 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="sg-core" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084616 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="ceilometer-central-agent" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.084626 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" containerName="proxy-httpd" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.086034 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.096262 4720 scope.go:117] "RemoveContainer" containerID="f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.096648 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.096861 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.100481 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.120273 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.166109 4720 scope.go:117] "RemoveContainer" containerID="ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.167564 
4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a\": container with ID starting with ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a not found: ID does not exist" containerID="ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.167599 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a"} err="failed to get container status \"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a\": rpc error: code = NotFound desc = could not find container \"ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a\": container with ID starting with ba271508e28e3554c6dd19286a2bf0ccc772a022acca3d6285cfcc6c819d460a not found: ID does not exist" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.167632 4720 scope.go:117] "RemoveContainer" containerID="88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.167901 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8\": container with ID starting with 88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8 not found: ID does not exist" containerID="88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.167938 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8"} err="failed to get container status \"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8\": rpc error: code = 
NotFound desc = could not find container \"88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8\": container with ID starting with 88630a110903fd1968947f53b3f0be10936faade7686d87353f488304510e4d8 not found: ID does not exist" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.167955 4720 scope.go:117] "RemoveContainer" containerID="38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.168339 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7\": container with ID starting with 38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7 not found: ID does not exist" containerID="38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.168512 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7"} err="failed to get container status \"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7\": rpc error: code = NotFound desc = could not find container \"38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7\": container with ID starting with 38d5ef9569c36f999e1ec73fbe636bf3f254a24292d79e9ed995ebe41ad8f0c7 not found: ID does not exist" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.168632 4720 scope.go:117] "RemoveContainer" containerID="f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e" Jan 22 07:09:29 crc kubenswrapper[4720]: E0122 07:09:29.169087 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e\": container with ID starting with 
f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e not found: ID does not exist" containerID="f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.169114 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e"} err="failed to get container status \"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e\": rpc error: code = NotFound desc = could not find container \"f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e\": container with ID starting with f252f8514c42bb1071ee3becf32807f4730a4207cd74db2f62bc7d41df78d91e not found: ID does not exist" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188158 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188209 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd752\" (UniqueName: \"kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 
07:09:29.188265 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188285 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188300 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.188355 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290257 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290323 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290377 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd752\" (UniqueName: \"kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290396 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290412 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd\") pod \"ceilometer-0\" 
(UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290468 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.290488 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.291011 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.291074 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.297613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.297731 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.298112 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.299554 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.306370 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.309613 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd752\" (UniqueName: \"kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752\") pod \"ceilometer-0\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.449804 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.921672 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.950751 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerStarted","Data":"545f60b59e2ef11d73585d21f2279752487d3bdee385b76391b14f6b9a4acbf3"} Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.956400 4720 generic.go:334] "Generic (PLEG): container finished" podID="e26c83ad-445d-4fb7-92f9-a830d1fd4e41" containerID="b0afe50b82a4ed4fa15c71d75b0addd355b0e11e803757d7a72ecdc39fd38963" exitCode=0 Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.956950 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7bkbr" event={"ID":"e26c83ad-445d-4fb7-92f9-a830d1fd4e41","Type":"ContainerDied","Data":"b0afe50b82a4ed4fa15c71d75b0addd355b0e11e803757d7a72ecdc39fd38963"} Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.958800 4720 generic.go:334] "Generic (PLEG): container finished" podID="5324884d-c405-4664-b229-59325b6fff1b" containerID="07fd63215362add398cc17fe1a62a2747a73c18494f3a671e91152137886d9b4" exitCode=0 Jan 22 07:09:29 crc kubenswrapper[4720]: I0122 07:09:29.958954 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" event={"ID":"5324884d-c405-4664-b229-59325b6fff1b","Type":"ContainerDied","Data":"07fd63215362add398cc17fe1a62a2747a73c18494f3a671e91152137886d9b4"} Jan 22 07:09:30 crc kubenswrapper[4720]: I0122 07:09:30.220837 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfd898a-5438-4c41-b043-989ea7ef24d0" path="/var/lib/kubelet/pods/1bfd898a-5438-4c41-b043-989ea7ef24d0/volumes" Jan 22 07:09:30 crc kubenswrapper[4720]: 
I0122 07:09:30.975435 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerStarted","Data":"e4e71d2118cca21a2e602bc67b2955947ba244b714edfee6ec2ed2f30bc4fbcb"} Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.416883 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.424419 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.551923 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz6gw\" (UniqueName: \"kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw\") pod \"5324884d-c405-4664-b229-59325b6fff1b\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.552097 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts\") pod \"5324884d-c405-4664-b229-59325b6fff1b\" (UID: \"5324884d-c405-4664-b229-59325b6fff1b\") " Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.552154 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s767\" (UniqueName: \"kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767\") pod \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.552221 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts\") pod \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\" (UID: \"e26c83ad-445d-4fb7-92f9-a830d1fd4e41\") " Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.552696 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5324884d-c405-4664-b229-59325b6fff1b" (UID: "5324884d-c405-4664-b229-59325b6fff1b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.552775 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e26c83ad-445d-4fb7-92f9-a830d1fd4e41" (UID: "e26c83ad-445d-4fb7-92f9-a830d1fd4e41"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.553281 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5324884d-c405-4664-b229-59325b6fff1b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.553309 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.556169 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw" (OuterVolumeSpecName: "kube-api-access-jz6gw") pod "5324884d-c405-4664-b229-59325b6fff1b" (UID: "5324884d-c405-4664-b229-59325b6fff1b"). 
InnerVolumeSpecName "kube-api-access-jz6gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.556241 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767" (OuterVolumeSpecName: "kube-api-access-4s767") pod "e26c83ad-445d-4fb7-92f9-a830d1fd4e41" (UID: "e26c83ad-445d-4fb7-92f9-a830d1fd4e41"). InnerVolumeSpecName "kube-api-access-4s767". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.654859 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz6gw\" (UniqueName: \"kubernetes.io/projected/5324884d-c405-4664-b229-59325b6fff1b-kube-api-access-jz6gw\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.654895 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s767\" (UniqueName: \"kubernetes.io/projected/e26c83ad-445d-4fb7-92f9-a830d1fd4e41-kube-api-access-4s767\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.989231 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerStarted","Data":"56df722864eb64ec95f0f741d84b7927ab4d291f2dc4c9c592876ab7958f4792"} Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.989581 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerStarted","Data":"a5703c2979540feb0ac4a611441df60d715cf3267aef5c957170533844acfe1a"} Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.993365 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-7bkbr" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.995121 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-7bkbr" event={"ID":"e26c83ad-445d-4fb7-92f9-a830d1fd4e41","Type":"ContainerDied","Data":"4e5ceb2c1a1273179092d686890a80815a58d0933d1c944f33c7f2a743793c4f"} Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.995211 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5ceb2c1a1273179092d686890a80815a58d0933d1c944f33c7f2a743793c4f" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.997800 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" event={"ID":"5324884d-c405-4664-b229-59325b6fff1b","Type":"ContainerDied","Data":"d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409"} Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.997835 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33028c6f932ae2046275c5a4d0ff1620912a0dacf286a82f28344f8cdfeb409" Jan 22 07:09:31 crc kubenswrapper[4720]: I0122 07:09:31.997974 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.085987 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw"] Jan 22 07:09:33 crc kubenswrapper[4720]: E0122 07:09:33.086765 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5324884d-c405-4664-b229-59325b6fff1b" containerName="mariadb-account-create-update" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.086782 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5324884d-c405-4664-b229-59325b6fff1b" containerName="mariadb-account-create-update" Jan 22 07:09:33 crc kubenswrapper[4720]: E0122 07:09:33.086817 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e26c83ad-445d-4fb7-92f9-a830d1fd4e41" containerName="mariadb-database-create" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.086826 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e26c83ad-445d-4fb7-92f9-a830d1fd4e41" containerName="mariadb-database-create" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.087045 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e26c83ad-445d-4fb7-92f9-a830d1fd4e41" containerName="mariadb-database-create" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.087065 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5324884d-c405-4664-b229-59325b6fff1b" containerName="mariadb-account-create-update" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.087899 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.095589 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.098048 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-57hws" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.107750 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw"] Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.200833 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6ff\" (UniqueName: \"kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.200896 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.201369 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.201505 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.303205 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.303315 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.303366 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6ff\" (UniqueName: \"kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.303443 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: 
I0122 07:09:33.308430 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.311669 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.316442 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.320627 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6ff\" (UniqueName: \"kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff\") pod \"watcher-kuttl-db-sync-bfbqw\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.409580 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:33 crc kubenswrapper[4720]: I0122 07:09:33.885295 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw"] Jan 22 07:09:33 crc kubenswrapper[4720]: W0122 07:09:33.885572 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd93a94ce_74e1_414f_930d_e74f67d17f2c.slice/crio-7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb WatchSource:0}: Error finding container 7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb: Status 404 returned error can't find the container with id 7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb Jan 22 07:09:34 crc kubenswrapper[4720]: I0122 07:09:34.039064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" event={"ID":"d93a94ce-74e1-414f-930d-e74f67d17f2c","Type":"ContainerStarted","Data":"7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb"} Jan 22 07:09:34 crc kubenswrapper[4720]: I0122 07:09:34.041938 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerStarted","Data":"9bec4c454909acdd09bd576b9a3be8ebdd1e883ed4bbab4511ac585dd593bc36"} Jan 22 07:09:34 crc kubenswrapper[4720]: I0122 07:09:34.042276 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:34 crc kubenswrapper[4720]: I0122 07:09:34.070007 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.163006344 podStartE2EDuration="5.069970918s" podCreationTimestamp="2026-01-22 07:09:29 +0000 UTC" firstStartedPulling="2026-01-22 07:09:29.933620963 +0000 UTC m=+2062.075527668" 
lastFinishedPulling="2026-01-22 07:09:32.840585537 +0000 UTC m=+2064.982492242" observedRunningTime="2026-01-22 07:09:34.062499826 +0000 UTC m=+2066.204406531" watchObservedRunningTime="2026-01-22 07:09:34.069970918 +0000 UTC m=+2066.211877623" Jan 22 07:09:35 crc kubenswrapper[4720]: I0122 07:09:35.052508 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" event={"ID":"d93a94ce-74e1-414f-930d-e74f67d17f2c","Type":"ContainerStarted","Data":"ae4d37f38ddf0bd212636ab0a7b8c476af35c18661e463ed2b384e725d443be0"} Jan 22 07:09:35 crc kubenswrapper[4720]: I0122 07:09:35.070713 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" podStartSLOduration=2.070690739 podStartE2EDuration="2.070690739s" podCreationTimestamp="2026-01-22 07:09:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:35.068365513 +0000 UTC m=+2067.210272228" watchObservedRunningTime="2026-01-22 07:09:35.070690739 +0000 UTC m=+2067.212597444" Jan 22 07:09:37 crc kubenswrapper[4720]: I0122 07:09:37.070299 4720 generic.go:334] "Generic (PLEG): container finished" podID="d93a94ce-74e1-414f-930d-e74f67d17f2c" containerID="ae4d37f38ddf0bd212636ab0a7b8c476af35c18661e463ed2b384e725d443be0" exitCode=0 Jan 22 07:09:37 crc kubenswrapper[4720]: I0122 07:09:37.070624 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" event={"ID":"d93a94ce-74e1-414f-930d-e74f67d17f2c","Type":"ContainerDied","Data":"ae4d37f38ddf0bd212636ab0a7b8c476af35c18661e463ed2b384e725d443be0"} Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.439147 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.611038 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data\") pod \"d93a94ce-74e1-414f-930d-e74f67d17f2c\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.611120 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle\") pod \"d93a94ce-74e1-414f-930d-e74f67d17f2c\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.611369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf6ff\" (UniqueName: \"kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff\") pod \"d93a94ce-74e1-414f-930d-e74f67d17f2c\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.611429 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data\") pod \"d93a94ce-74e1-414f-930d-e74f67d17f2c\" (UID: \"d93a94ce-74e1-414f-930d-e74f67d17f2c\") " Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.626087 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d93a94ce-74e1-414f-930d-e74f67d17f2c" (UID: "d93a94ce-74e1-414f-930d-e74f67d17f2c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.626160 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff" (OuterVolumeSpecName: "kube-api-access-bf6ff") pod "d93a94ce-74e1-414f-930d-e74f67d17f2c" (UID: "d93a94ce-74e1-414f-930d-e74f67d17f2c"). InnerVolumeSpecName "kube-api-access-bf6ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.631959 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d93a94ce-74e1-414f-930d-e74f67d17f2c" (UID: "d93a94ce-74e1-414f-930d-e74f67d17f2c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.654031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data" (OuterVolumeSpecName: "config-data") pod "d93a94ce-74e1-414f-930d-e74f67d17f2c" (UID: "d93a94ce-74e1-414f-930d-e74f67d17f2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.713390 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf6ff\" (UniqueName: \"kubernetes.io/projected/d93a94ce-74e1-414f-930d-e74f67d17f2c-kube-api-access-bf6ff\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.713426 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.713436 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:38 crc kubenswrapper[4720]: I0122 07:09:38.713446 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93a94ce-74e1-414f-930d-e74f67d17f2c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.092778 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" event={"ID":"d93a94ce-74e1-414f-930d-e74f67d17f2c","Type":"ContainerDied","Data":"7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb"} Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.092822 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7884722469ea17ac03ca1ceb8020c77295d302e6e994c78eff93ea0ccdb3f5cb" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.092888 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.364670 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: E0122 07:09:39.365125 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93a94ce-74e1-414f-930d-e74f67d17f2c" containerName="watcher-kuttl-db-sync" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.365145 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93a94ce-74e1-414f-930d-e74f67d17f2c" containerName="watcher-kuttl-db-sync" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.365315 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93a94ce-74e1-414f-930d-e74f67d17f2c" containerName="watcher-kuttl-db-sync" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.366632 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.370084 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-57hws" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.370328 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.383235 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.467134 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.475681 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.478688 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.483386 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.500777 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.502366 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.515337 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.522056 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.525588 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.525684 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 
crc kubenswrapper[4720]: I0122 07:09:39.525733 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.525764 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6g7\" (UniqueName: \"kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.525794 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.525857 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627272 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627329 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627355 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627382 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627403 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627424 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627459 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr654\" (UniqueName: \"kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627542 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627591 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627629 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627669 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data\") pod 
\"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627695 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6g7\" (UniqueName: \"kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627727 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627760 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627796 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627827 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm59q\" (UniqueName: 
\"kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.627850 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.628279 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.632295 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.632419 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.632495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data\") pod 
\"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.632512 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.647599 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6g7\" (UniqueName: \"kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7\") pod \"watcher-kuttl-api-0\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.696368 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.729900 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730367 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730397 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730413 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730431 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr654\" (UniqueName: \"kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730455 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730497 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730625 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm59q\" (UniqueName: \"kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.730647 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.731755 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.734738 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.735234 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.736384 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.739082 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.739271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.739623 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.740492 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.740670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.756769 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr654\" (UniqueName: \"kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654\") pod \"watcher-kuttl-applier-0\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.757852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm59q\" (UniqueName: \"kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.793586 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:39 crc kubenswrapper[4720]: I0122 07:09:39.838711 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:40 crc kubenswrapper[4720]: I0122 07:09:40.225397 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:09:40 crc kubenswrapper[4720]: I0122 07:09:40.318467 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:09:40 crc kubenswrapper[4720]: W0122 07:09:40.327402 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe2f9b40_2fd1_4ae5_8772_d8770884bd9d.slice/crio-00318edb60daddb11d74c999395b56ffb244a3f537c4a21c7e759ddf41dd5f16 WatchSource:0}: Error finding container 00318edb60daddb11d74c999395b56ffb244a3f537c4a21c7e759ddf41dd5f16: Status 404 returned error can't find the container with id 00318edb60daddb11d74c999395b56ffb244a3f537c4a21c7e759ddf41dd5f16 Jan 22 07:09:40 crc kubenswrapper[4720]: I0122 07:09:40.341925 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:09:40 crc kubenswrapper[4720]: W0122 07:09:40.354069 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95a76f1b_07af_4869_b242_1cdbdb0b1f98.slice/crio-9f37c7cc01420a6cfef868e161e13747d31de623a5722864437ebfa1df22c805 WatchSource:0}: Error finding container 9f37c7cc01420a6cfef868e161e13747d31de623a5722864437ebfa1df22c805: Status 404 returned error can't find the container with id 9f37c7cc01420a6cfef868e161e13747d31de623a5722864437ebfa1df22c805 Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.129362 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerStarted","Data":"71fee65bd2a62e26d0e6179628570fd693e3bce7b1561fbb3408c11c7fdd5cda"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.129843 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.129857 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerStarted","Data":"de6570d94cc0eb56bc1241f55456918f2f6ecc692da21e09b65ee0d291644536"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.129867 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerStarted","Data":"71e5ece357d2a36464161da08fb7b28cbcfbea9f4161795951a993ef4a5c01a6"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.133534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"be2f9b40-2fd1-4ae5-8772-d8770884bd9d","Type":"ContainerStarted","Data":"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.133613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"be2f9b40-2fd1-4ae5-8772-d8770884bd9d","Type":"ContainerStarted","Data":"00318edb60daddb11d74c999395b56ffb244a3f537c4a21c7e759ddf41dd5f16"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.138139 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"95a76f1b-07af-4869-b242-1cdbdb0b1f98","Type":"ContainerStarted","Data":"81723a08301dd466878bfb4f71b3ed672eb44ba3a7aa82d52fb24b1c976f949b"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 
07:09:41.138202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"95a76f1b-07af-4869-b242-1cdbdb0b1f98","Type":"ContainerStarted","Data":"9f37c7cc01420a6cfef868e161e13747d31de623a5722864437ebfa1df22c805"} Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.161190 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.161166731 podStartE2EDuration="2.161166731s" podCreationTimestamp="2026-01-22 07:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:41.154519624 +0000 UTC m=+2073.296426329" watchObservedRunningTime="2026-01-22 07:09:41.161166731 +0000 UTC m=+2073.303073446" Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.193652 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.193630898 podStartE2EDuration="2.193630898s" podCreationTimestamp="2026-01-22 07:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:41.190605282 +0000 UTC m=+2073.332512007" watchObservedRunningTime="2026-01-22 07:09:41.193630898 +0000 UTC m=+2073.335537603" Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.210336 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.210313429 podStartE2EDuration="2.210313429s" podCreationTimestamp="2026-01-22 07:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:09:41.205962016 +0000 UTC m=+2073.347868721" watchObservedRunningTime="2026-01-22 07:09:41.210313429 +0000 UTC 
m=+2073.352220134" Jan 22 07:09:41 crc kubenswrapper[4720]: I0122 07:09:41.560246 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:42 crc kubenswrapper[4720]: I0122 07:09:42.768689 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:43 crc kubenswrapper[4720]: I0122 07:09:43.445529 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:43 crc kubenswrapper[4720]: I0122 07:09:43.992970 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:44 crc kubenswrapper[4720]: I0122 07:09:44.697154 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:44 crc kubenswrapper[4720]: I0122 07:09:44.794734 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:45 crc kubenswrapper[4720]: I0122 07:09:45.246633 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:46 crc kubenswrapper[4720]: I0122 07:09:46.503267 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:47 crc kubenswrapper[4720]: I0122 07:09:47.725103 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:48 crc kubenswrapper[4720]: I0122 07:09:48.980879 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.696899 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.701471 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.795449 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.826722 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.839928 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:49 crc kubenswrapper[4720]: I0122 07:09:49.875335 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:50 crc kubenswrapper[4720]: I0122 07:09:50.196846 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:50 crc kubenswrapper[4720]: I0122 07:09:50.224611 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:50 crc 
kubenswrapper[4720]: I0122 07:09:50.224684 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:09:50 crc kubenswrapper[4720]: I0122 07:09:50.254939 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:09:50 crc kubenswrapper[4720]: I0122 07:09:50.255702 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:09:51 crc kubenswrapper[4720]: I0122 07:09:51.403640 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:51 crc kubenswrapper[4720]: I0122 07:09:51.689536 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.270728 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"] Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.272040 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.276039 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-db-secret" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.279607 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-create-q6sv2"] Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.280925 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-q6sv2" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.287667 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"] Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.296272 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-q6sv2"] Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.378925 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.379015 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5c4\" (UniqueName: \"kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.379057 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2" Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.379106 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n7jz\" (UniqueName: 
\"kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.480700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5c4\" (UniqueName: \"kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.480758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.480810 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n7jz\" (UniqueName: \"kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.480885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.481675 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.481679 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.503764 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5c4\" (UniqueName: \"kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4\") pod \"cinder-17a7-account-create-update-lx4js\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") " pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.506385 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n7jz\" (UniqueName: \"kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz\") pod \"cinder-db-create-q6sv2\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") " pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.599396 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.607394 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.943275 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.990092 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.990432 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-central-agent" containerID="cri-o://e4e71d2118cca21a2e602bc67b2955947ba244b714edfee6ec2ed2f30bc4fbcb" gracePeriod=30
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.991072 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="proxy-httpd" containerID="cri-o://9bec4c454909acdd09bd576b9a3be8ebdd1e883ed4bbab4511ac585dd593bc36" gracePeriod=30
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.991131 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="sg-core" containerID="cri-o://56df722864eb64ec95f0f741d84b7927ab4d291f2dc4c9c592876ab7958f4792" gracePeriod=30
Jan 22 07:09:52 crc kubenswrapper[4720]: I0122 07:09:52.991155 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-notification-agent" containerID="cri-o://a5703c2979540feb0ac4a611441df60d715cf3267aef5c957170533844acfe1a" gracePeriod=30
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.097010 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.196:3000/\": read tcp 10.217.0.2:45218->10.217.0.196:3000: read: connection reset by peer"
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.138282 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"]
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.188553 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-create-q6sv2"]
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.252302 4720 generic.go:334] "Generic (PLEG): container finished" podID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerID="9bec4c454909acdd09bd576b9a3be8ebdd1e883ed4bbab4511ac585dd593bc36" exitCode=0
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.252781 4720 generic.go:334] "Generic (PLEG): container finished" podID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerID="56df722864eb64ec95f0f741d84b7927ab4d291f2dc4c9c592876ab7958f4792" exitCode=2
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.252410 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerDied","Data":"9bec4c454909acdd09bd576b9a3be8ebdd1e883ed4bbab4511ac585dd593bc36"}
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.252949 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerDied","Data":"56df722864eb64ec95f0f741d84b7927ab4d291f2dc4c9c592876ab7958f4792"}
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.254629 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" event={"ID":"0024e023-1c1d-4b82-bd73-fc7646298fb6","Type":"ContainerStarted","Data":"b6c504563efb5667f8778da58c65f9b467da8936a03a0ba74d2596c817039115"}
Jan 22 07:09:53 crc kubenswrapper[4720]: I0122 07:09:53.260461 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-q6sv2" event={"ID":"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc","Type":"ContainerStarted","Data":"bcdaf08f0be851dd2cf96aa61a27cd190d4dadfe6957f0caead9245c25a82a47"}
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.129800 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.270917 4720 generic.go:334] "Generic (PLEG): container finished" podID="d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" containerID="7869ba0810291f5b33a7836d7f41e37e907cada1c5944038c89e9e4ad91c493a" exitCode=0
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.270971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-q6sv2" event={"ID":"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc","Type":"ContainerDied","Data":"7869ba0810291f5b33a7836d7f41e37e907cada1c5944038c89e9e4ad91c493a"}
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.274255 4720 generic.go:334] "Generic (PLEG): container finished" podID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerID="e4e71d2118cca21a2e602bc67b2955947ba244b714edfee6ec2ed2f30bc4fbcb" exitCode=0
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.274300 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerDied","Data":"e4e71d2118cca21a2e602bc67b2955947ba244b714edfee6ec2ed2f30bc4fbcb"}
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.276353 4720 generic.go:334] "Generic (PLEG): container finished" podID="0024e023-1c1d-4b82-bd73-fc7646298fb6" containerID="757b4c064a93c7e227f8248cc00357eebf04139d73204e5f3483b7e74714082e" exitCode=0
Jan 22 07:09:54 crc kubenswrapper[4720]: I0122 07:09:54.276438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" event={"ID":"0024e023-1c1d-4b82-bd73-fc7646298fb6","Type":"ContainerDied","Data":"757b4c064a93c7e227f8248cc00357eebf04139d73204e5f3483b7e74714082e"}
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.387934 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.783553 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.790479 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.955188 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n7jz\" (UniqueName: \"kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz\") pod \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") "
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.955413 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts\") pod \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\" (UID: \"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc\") "
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.955439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts\") pod \"0024e023-1c1d-4b82-bd73-fc7646298fb6\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") "
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.955630 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc5c4\" (UniqueName: \"kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4\") pod \"0024e023-1c1d-4b82-bd73-fc7646298fb6\" (UID: \"0024e023-1c1d-4b82-bd73-fc7646298fb6\") "
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.956067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0024e023-1c1d-4b82-bd73-fc7646298fb6" (UID: "0024e023-1c1d-4b82-bd73-fc7646298fb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.956067 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" (UID: "d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.961885 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4" (OuterVolumeSpecName: "kube-api-access-lc5c4") pod "0024e023-1c1d-4b82-bd73-fc7646298fb6" (UID: "0024e023-1c1d-4b82-bd73-fc7646298fb6"). InnerVolumeSpecName "kube-api-access-lc5c4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:09:55 crc kubenswrapper[4720]: I0122 07:09:55.962051 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz" (OuterVolumeSpecName: "kube-api-access-8n7jz") pod "d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" (UID: "d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc"). InnerVolumeSpecName "kube-api-access-8n7jz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.057984 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.058020 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0024e023-1c1d-4b82-bd73-fc7646298fb6-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.058030 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc5c4\" (UniqueName: \"kubernetes.io/projected/0024e023-1c1d-4b82-bd73-fc7646298fb6-kube-api-access-lc5c4\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.058040 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n7jz\" (UniqueName: \"kubernetes.io/projected/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc-kube-api-access-8n7jz\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.293654 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js" event={"ID":"0024e023-1c1d-4b82-bd73-fc7646298fb6","Type":"ContainerDied","Data":"b6c504563efb5667f8778da58c65f9b467da8936a03a0ba74d2596c817039115"}
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.293695 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c504563efb5667f8778da58c65f9b467da8936a03a0ba74d2596c817039115"
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.293778 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.295903 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-create-q6sv2" event={"ID":"d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc","Type":"ContainerDied","Data":"bcdaf08f0be851dd2cf96aa61a27cd190d4dadfe6957f0caead9245c25a82a47"}
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.295938 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcdaf08f0be851dd2cf96aa61a27cd190d4dadfe6957f0caead9245c25a82a47"
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.295994 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-create-q6sv2"
Jan 22 07:09:56 crc kubenswrapper[4720]: I0122 07:09:56.627639 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.605550 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-db-sync-4dc5v"]
Jan 22 07:09:57 crc kubenswrapper[4720]: E0122 07:09:57.606329 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" containerName="mariadb-database-create"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.606354 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" containerName="mariadb-database-create"
Jan 22 07:09:57 crc kubenswrapper[4720]: E0122 07:09:57.606379 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0024e023-1c1d-4b82-bd73-fc7646298fb6" containerName="mariadb-account-create-update"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.606390 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0024e023-1c1d-4b82-bd73-fc7646298fb6" containerName="mariadb-account-create-update"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.606600 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" containerName="mariadb-database-create"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.606621 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0024e023-1c1d-4b82-bd73-fc7646298fb6" containerName="mariadb-account-create-update"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.607473 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.613880 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.613931 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-j7jsc"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.614584 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.615443 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-4dc5v"]
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.691990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.692070 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.692103 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.692156 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7df9x\" (UniqueName: \"kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.692187 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.692256 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794236 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794319 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794470 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794529 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794561 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7df9x\" (UniqueName: \"kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.794584 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.801779 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.801925 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.802538 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.802937 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.821271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7df9x\" (UniqueName: \"kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x\") pod \"cinder-db-sync-4dc5v\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.871970 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:57 crc kubenswrapper[4720]: I0122 07:09:57.925722 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-4dc5v"
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.316771 4720 generic.go:334] "Generic (PLEG): container finished" podID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerID="a5703c2979540feb0ac4a611441df60d715cf3267aef5c957170533844acfe1a" exitCode=0
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.316860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerDied","Data":"a5703c2979540feb0ac4a611441df60d715cf3267aef5c957170533844acfe1a"}
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.412287 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-4dc5v"]
Jan 22 07:09:58 crc kubenswrapper[4720]: W0122 07:09:58.415966 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63210d7b_5ccb_49b7_a85c_0c136a6ab0c9.slice/crio-f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1 WatchSource:0}: Error finding container f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1: Status 404 returned error can't find the container with id f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.480151 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617367 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617443 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617504 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617546 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617820 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd752\" (UniqueName: \"kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617883 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.617971 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs\") pod \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\" (UID: \"a09fd934-5a94-44e1-a13c-0b7ba32a4987\") "
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.618446 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.618516 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.618546 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.623985 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts" (OuterVolumeSpecName: "scripts") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.624684 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752" (OuterVolumeSpecName: "kube-api-access-bd752") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "kube-api-access-bd752". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.646715 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.668184 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.696517 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.704349 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data" (OuterVolumeSpecName: "config-data") pod "a09fd934-5a94-44e1-a13c-0b7ba32a4987" (UID: "a09fd934-5a94-44e1-a13c-0b7ba32a4987"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720525 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720556 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd752\" (UniqueName: \"kubernetes.io/projected/a09fd934-5a94-44e1-a13c-0b7ba32a4987-kube-api-access-bd752\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720567 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720575 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720586 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720597 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09fd934-5a94-44e1-a13c-0b7ba32a4987-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:58 crc kubenswrapper[4720]: I0122 07:09:58.720609 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09fd934-5a94-44e1-a13c-0b7ba32a4987-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.112312 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.329586 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" event={"ID":"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9","Type":"ContainerStarted","Data":"f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1"}
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.335514 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a09fd934-5a94-44e1-a13c-0b7ba32a4987","Type":"ContainerDied","Data":"545f60b59e2ef11d73585d21f2279752487d3bdee385b76391b14f6b9a4acbf3"}
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.335562 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.335589 4720 scope.go:117] "RemoveContainer" containerID="9bec4c454909acdd09bd576b9a3be8ebdd1e883ed4bbab4511ac585dd593bc36"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.365133 4720 scope.go:117] "RemoveContainer" containerID="56df722864eb64ec95f0f741d84b7927ab4d291f2dc4c9c592876ab7958f4792"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.379321 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.379400 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.403762 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:09:59 crc kubenswrapper[4720]: E0122 07:09:59.404618 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="proxy-httpd"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.404644 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="proxy-httpd"
Jan 22 07:09:59 crc kubenswrapper[4720]: E0122 07:09:59.404663 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-central-agent"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.404672 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-central-agent"
Jan 22 07:09:59 crc kubenswrapper[4720]: E0122 07:09:59.404685 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="sg-core"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.404694 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="sg-core"
Jan 22 07:09:59 crc kubenswrapper[4720]: E0122 07:09:59.404717 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-notification-agent"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.404725 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-notification-agent"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.405463 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-central-agent"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.405488 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="proxy-httpd"
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.405508 4720 memory_manager.go:354]
"RemoveStaleState removing state" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="sg-core" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.405522 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" containerName="ceilometer-notification-agent" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.413144 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.414188 4720 scope.go:117] "RemoveContainer" containerID="a5703c2979540feb0ac4a611441df60d715cf3267aef5c957170533844acfe1a" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.417747 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.417751 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.418343 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.435851 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.472429 4720 scope.go:117] "RemoveContainer" containerID="e4e71d2118cca21a2e602bc67b2955947ba244b714edfee6ec2ed2f30bc4fbcb" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534074 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534130 
4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534288 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534362 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534405 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cptjb\" (UniqueName: \"kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534500 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.534538 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.636972 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637021 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637045 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cptjb\" (UniqueName: \"kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637111 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637180 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637202 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637306 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637611 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" 
Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.637979 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.643021 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.644245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.644452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.644936 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.649052 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.668711 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cptjb\" (UniqueName: \"kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb\") pod \"ceilometer-0\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:09:59 crc kubenswrapper[4720]: I0122 07:09:59.744902 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:00 crc kubenswrapper[4720]: I0122 07:10:00.230553 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09fd934-5a94-44e1-a13c-0b7ba32a4987" path="/var/lib/kubelet/pods/a09fd934-5a94-44e1-a13c-0b7ba32a4987/volumes" Jan 22 07:10:00 crc kubenswrapper[4720]: I0122 07:10:00.244882 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:10:00 crc kubenswrapper[4720]: W0122 07:10:00.249828 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba5fb927_8677_4576_85bf_75621f514a9d.slice/crio-238f963a3d7f27ac0966c30ea48077e94e8f9d9f4e848d71654873a6b6ac0122 WatchSource:0}: Error finding container 238f963a3d7f27ac0966c30ea48077e94e8f9d9f4e848d71654873a6b6ac0122: Status 404 returned error can't find the container with id 238f963a3d7f27ac0966c30ea48077e94e8f9d9f4e848d71654873a6b6ac0122 Jan 22 07:10:00 crc kubenswrapper[4720]: I0122 07:10:00.305751 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:00 crc kubenswrapper[4720]: I0122 
07:10:00.355549 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerStarted","Data":"238f963a3d7f27ac0966c30ea48077e94e8f9d9f4e848d71654873a6b6ac0122"} Jan 22 07:10:01 crc kubenswrapper[4720]: I0122 07:10:01.374046 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerStarted","Data":"3f373f6594899caa060de0a7c781957193a49f39e24312aaf9742c8b300e488a"} Jan 22 07:10:01 crc kubenswrapper[4720]: I0122 07:10:01.512519 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:02 crc kubenswrapper[4720]: I0122 07:10:02.386958 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerStarted","Data":"0e84feceb255ba20a9809b499f4d49f9fcebdb02128d64c15388577a8d3649e7"} Jan 22 07:10:02 crc kubenswrapper[4720]: I0122 07:10:02.387514 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerStarted","Data":"fcaa5ffa174c2659a1c4077ca79099f49c331ab17f3870fae02ea2968d89ca46"} Jan 22 07:10:02 crc kubenswrapper[4720]: I0122 07:10:02.739669 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:04 crc kubenswrapper[4720]: I0122 07:10:04.004080 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:04 crc kubenswrapper[4720]: I0122 
07:10:04.407593 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerStarted","Data":"5ef80c746c7c023464e9624ad446abfc3200ae365249db7487fafd95954842d3"} Jan 22 07:10:04 crc kubenswrapper[4720]: I0122 07:10:04.408940 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:04 crc kubenswrapper[4720]: I0122 07:10:04.439021 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.393926239 podStartE2EDuration="5.438998773s" podCreationTimestamp="2026-01-22 07:09:59 +0000 UTC" firstStartedPulling="2026-01-22 07:10:00.252241417 +0000 UTC m=+2092.394148122" lastFinishedPulling="2026-01-22 07:10:03.297313951 +0000 UTC m=+2095.439220656" observedRunningTime="2026-01-22 07:10:04.434924618 +0000 UTC m=+2096.576831333" watchObservedRunningTime="2026-01-22 07:10:04.438998773 +0000 UTC m=+2096.580905478" Jan 22 07:10:05 crc kubenswrapper[4720]: I0122 07:10:05.229819 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:06 crc kubenswrapper[4720]: I0122 07:10:06.483318 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:07 crc kubenswrapper[4720]: I0122 07:10:07.713120 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:08 crc kubenswrapper[4720]: I0122 07:10:08.967236 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:10 crc kubenswrapper[4720]: I0122 07:10:10.180260 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:11 crc kubenswrapper[4720]: I0122 07:10:11.377424 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:12 crc kubenswrapper[4720]: I0122 07:10:12.649568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:13 crc kubenswrapper[4720]: I0122 07:10:13.377350 4720 scope.go:117] "RemoveContainer" containerID="65d166d097ed14bb8a1fe7053c0cc1eaf0376b4d0c4e5917295c48d62590d8f8" Jan 22 07:10:13 crc kubenswrapper[4720]: I0122 07:10:13.869375 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:15 crc kubenswrapper[4720]: I0122 07:10:15.052385 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:16 crc kubenswrapper[4720]: I0122 07:10:16.317120 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:17 crc kubenswrapper[4720]: I0122 07:10:17.246453 4720 scope.go:117] "RemoveContainer" containerID="19692928c611dc07da557717d9019373ee97a4da2fce2243759faecdd7a6a4dc" Jan 
22 07:10:17 crc kubenswrapper[4720]: I0122 07:10:17.285716 4720 scope.go:117] "RemoveContainer" containerID="18ec0b86850adae9be13f01e14a57e94e502305c7156e09baee267f6d9df281d" Jan 22 07:10:17 crc kubenswrapper[4720]: E0122 07:10:17.298147 4720 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 22 07:10:17 crc kubenswrapper[4720]: E0122 07:10:17.298319 4720 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount
{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7df9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4dc5v_watcher-kuttl-default(63210d7b-5ccb-49b7-a85c-0c136a6ab0c9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 22 07:10:17 crc kubenswrapper[4720]: E0122 07:10:17.299494 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" Jan 22 07:10:17 crc kubenswrapper[4720]: I0122 07:10:17.564354 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:17 crc kubenswrapper[4720]: E0122 07:10:17.590766 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" Jan 22 07:10:18 crc kubenswrapper[4720]: I0122 07:10:18.818937 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:20 crc kubenswrapper[4720]: I0122 07:10:20.058148 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:21 crc kubenswrapper[4720]: I0122 07:10:21.294562 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:22 crc kubenswrapper[4720]: I0122 07:10:22.542711 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:23 crc kubenswrapper[4720]: I0122 07:10:23.771015 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:24 crc kubenswrapper[4720]: I0122 07:10:24.995460 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:26 crc kubenswrapper[4720]: I0122 07:10:26.243371 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:27 crc kubenswrapper[4720]: I0122 07:10:27.474473 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:28 crc kubenswrapper[4720]: I0122 07:10:28.688567 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:29 crc kubenswrapper[4720]: I0122 07:10:29.753392 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:29 crc kubenswrapper[4720]: I0122 07:10:29.916015 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:31 crc kubenswrapper[4720]: I0122 07:10:31.132432 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:32 crc kubenswrapper[4720]: I0122 07:10:32.342127 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:32 crc kubenswrapper[4720]: I0122 07:10:32.723726 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" event={"ID":"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9","Type":"ContainerStarted","Data":"16ed8beab0f87d6a21c1576aa5bdd58052f6c272d8b38b9a587d0a2eb080151e"} Jan 22 07:10:32 crc kubenswrapper[4720]: I0122 07:10:32.746590 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" podStartSLOduration=2.532852112 podStartE2EDuration="35.746567914s" podCreationTimestamp="2026-01-22 07:09:57 +0000 UTC" 
firstStartedPulling="2026-01-22 07:09:58.418667546 +0000 UTC m=+2090.560574251" lastFinishedPulling="2026-01-22 07:10:31.632383348 +0000 UTC m=+2123.774290053" observedRunningTime="2026-01-22 07:10:32.745509254 +0000 UTC m=+2124.887415969" watchObservedRunningTime="2026-01-22 07:10:32.746567914 +0000 UTC m=+2124.888474619" Jan 22 07:10:33 crc kubenswrapper[4720]: I0122 07:10:33.515237 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:34 crc kubenswrapper[4720]: I0122 07:10:34.705889 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:35 crc kubenswrapper[4720]: I0122 07:10:35.936261 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:37 crc kubenswrapper[4720]: I0122 07:10:37.176758 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:37 crc kubenswrapper[4720]: I0122 07:10:37.906774 4720 generic.go:334] "Generic (PLEG): container finished" podID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" containerID="16ed8beab0f87d6a21c1576aa5bdd58052f6c272d8b38b9a587d0a2eb080151e" exitCode=0 Jan 22 07:10:37 crc kubenswrapper[4720]: I0122 07:10:37.906829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" event={"ID":"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9","Type":"ContainerDied","Data":"16ed8beab0f87d6a21c1576aa5bdd58052f6c272d8b38b9a587d0a2eb080151e"} Jan 22 07:10:38 crc kubenswrapper[4720]: I0122 07:10:38.426109 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.284634 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337431 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337500 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337525 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337581 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7df9x\" (UniqueName: \"kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337631 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337873 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id\") pod \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\" (UID: \"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9\") " Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.337982 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.338482 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.346110 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts" (OuterVolumeSpecName: "scripts") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.347055 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.347105 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x" (OuterVolumeSpecName: "kube-api-access-7df9x") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "kube-api-access-7df9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.390616 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data" (OuterVolumeSpecName: "config-data") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.392942 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" (UID: "63210d7b-5ccb-49b7-a85c-0c136a6ab0c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.441144 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7df9x\" (UniqueName: \"kubernetes.io/projected/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-kube-api-access-7df9x\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.441190 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.441203 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.441213 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.441234 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.631709 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.922863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" event={"ID":"63210d7b-5ccb-49b7-a85c-0c136a6ab0c9","Type":"ContainerDied","Data":"f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1"} Jan 22 07:10:39 crc kubenswrapper[4720]: 
I0122 07:10:39.922940 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97af350dfe67214b1282381600bebde0f0ea35696603c7ab2342d2cf4aba1e1" Jan 22 07:10:39 crc kubenswrapper[4720]: I0122 07:10:39.922979 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-db-sync-4dc5v" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.201382 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: E0122 07:10:40.201778 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" containerName="cinder-db-sync" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.201797 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" containerName="cinder-db-sync" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.202082 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" containerName="cinder-db-sync" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.203414 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.205745 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-cinder-dockercfg-j7jsc" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.205878 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.207798 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-config-data" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.207798 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scripts" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.228929 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.254996 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255065 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255178 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldzhm\" (UniqueName: 
\"kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255230 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255271 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255312 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.255338 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.337431 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 
07:10:40.339153 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.343460 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.357837 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.357929 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.357966 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358003 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358048 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358077 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358099 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j2x\" (UniqueName: \"kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358124 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358148 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358174 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358202 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358232 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358255 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358284 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358308 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358336 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358368 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358399 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358426 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358472 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358516 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldzhm\" (UniqueName: \"kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358553 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.358670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.362793 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.364989 4720 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.365935 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.366243 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.373081 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.388803 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.400830 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldzhm\" (UniqueName: 
\"kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm\") pod \"cinder-scheduler-0\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459598 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459628 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459648 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459653 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id\") pod \"cinder-backup-0\" (UID: 
\"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459666 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459709 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459694 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459720 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459769 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459797 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459821 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2j2x\" (UniqueName: \"kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459868 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459885 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459920 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459942 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459955 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.459975 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460003 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460035 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 
07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460069 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460125 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.460266 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.463452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.464460 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.464788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.466260 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.477047 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.478585 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2j2x\" (UniqueName: \"kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x\") pod \"cinder-backup-0\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " 
pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.524411 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.539249 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.540902 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.544309 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562098 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hch\" (UniqueName: \"kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562189 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562209 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562274 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.562306 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.570202 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.665658 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.667639 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.667893 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hch\" (UniqueName: \"kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668108 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668422 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668539 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668641 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.668734 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.675468 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.681673 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.694583 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.695328 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.697493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.698200 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.698474 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.707585 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hch\" (UniqueName: \"kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch\") pod \"cinder-api-0\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") 
" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.841772 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:40 crc kubenswrapper[4720]: I0122 07:10:40.913657 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.471657 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.490531 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.578351 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.954075 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerStarted","Data":"4f791c51a8206c36cb482cbae68fad4d5b1d7e7707127b102287330218c7dda9"} Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.961978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerStarted","Data":"a5d0a5f4a9822c623b591aaa7ddf908cfeac3a63c26d25ad2a9d0a7492c3b9e4"} Jan 22 07:10:41 crc kubenswrapper[4720]: I0122 07:10:41.964517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerStarted","Data":"d6d6117c1120a89cb4be5130ea68b5ffa4767b58752805c0520d64ada6e81193"} Jan 22 07:10:42 crc kubenswrapper[4720]: I0122 07:10:42.073786 4720 log.go:25] "Finished parsing 
log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:43 crc kubenswrapper[4720]: I0122 07:10:43.042932 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:43 crc kubenswrapper[4720]: I0122 07:10:43.251005 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:44 crc kubenswrapper[4720]: I0122 07:10:44.452985 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:45 crc kubenswrapper[4720]: I0122 07:10:45.001017 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerStarted","Data":"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a"} Jan 22 07:10:45 crc kubenswrapper[4720]: I0122 07:10:45.652708 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:46 crc kubenswrapper[4720]: I0122 07:10:46.013979 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerStarted","Data":"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef"} Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.025504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerStarted","Data":"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a"} Jan 
22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.025692 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api-log" containerID="cri-o://c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" gracePeriod=30 Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.025789 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api" containerID="cri-o://c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" gracePeriod=30 Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.026258 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.032337 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerStarted","Data":"7c06979cbedc656c279cca90e6dc5e1aa723ead371a603c21e08615651217cf2"} Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.032392 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerStarted","Data":"753d6c0185b49e4e7fdee9d1211d3956917c4728af83553450745e1adae50bbd"} Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.035104 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerStarted","Data":"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e"} Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.051129 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=7.05109821 podStartE2EDuration="7.05109821s" podCreationTimestamp="2026-01-22 07:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:10:47.046686176 +0000 UTC m=+2139.188592901" watchObservedRunningTime="2026-01-22 07:10:47.05109821 +0000 UTC m=+2139.193004915" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.079250 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=3.095001577 podStartE2EDuration="7.079226064s" podCreationTimestamp="2026-01-22 07:10:40 +0000 UTC" firstStartedPulling="2026-01-22 07:10:41.484789295 +0000 UTC m=+2133.626696010" lastFinishedPulling="2026-01-22 07:10:45.469013792 +0000 UTC m=+2137.610920497" observedRunningTime="2026-01-22 07:10:47.071874547 +0000 UTC m=+2139.213781252" watchObservedRunningTime="2026-01-22 07:10:47.079226064 +0000 UTC m=+2139.221132769" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.096366 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.211803216 podStartE2EDuration="7.096339528s" podCreationTimestamp="2026-01-22 07:10:40 +0000 UTC" firstStartedPulling="2026-01-22 07:10:41.584905682 +0000 UTC m=+2133.726812397" lastFinishedPulling="2026-01-22 07:10:45.469442004 +0000 UTC m=+2137.611348709" observedRunningTime="2026-01-22 07:10:47.095737561 +0000 UTC m=+2139.237644286" watchObservedRunningTime="2026-01-22 07:10:47.096339528 +0000 UTC m=+2139.238246233" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.190682 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.675936 4720 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.791982 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792062 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hch\" (UniqueName: \"kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792189 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792306 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792344 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792405 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792466 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792503 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data\") pod \"3fda2d72-d200-4d0a-ae15-328580f16d78\" (UID: \"3fda2d72-d200-4d0a-ae15-328580f16d78\") " Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.792433 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.793390 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs" (OuterVolumeSpecName: "logs") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.802527 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.802649 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts" (OuterVolumeSpecName: "scripts") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.817123 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch" (OuterVolumeSpecName: "kube-api-access-77hch") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "kube-api-access-77hch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.831376 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.891046 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data" (OuterVolumeSpecName: "config-data") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894650 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894695 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894711 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fda2d72-d200-4d0a-ae15-328580f16d78-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894724 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894734 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894750 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77hch\" (UniqueName: 
\"kubernetes.io/projected/3fda2d72-d200-4d0a-ae15-328580f16d78-kube-api-access-77hch\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.894761 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fda2d72-d200-4d0a-ae15-328580f16d78-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.930609 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3fda2d72-d200-4d0a-ae15-328580f16d78" (UID: "3fda2d72-d200-4d0a-ae15-328580f16d78"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:47 crc kubenswrapper[4720]: I0122 07:10:47.996847 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3fda2d72-d200-4d0a-ae15-328580f16d78-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057304 4720 generic.go:334] "Generic (PLEG): container finished" podID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerID="c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" exitCode=0 Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057344 4720 generic.go:334] "Generic (PLEG): container finished" podID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerID="c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" exitCode=143 Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057453 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057505 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerDied","Data":"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a"} Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057545 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerDied","Data":"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a"} Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057567 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"3fda2d72-d200-4d0a-ae15-328580f16d78","Type":"ContainerDied","Data":"4f791c51a8206c36cb482cbae68fad4d5b1d7e7707127b102287330218c7dda9"} Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.057585 4720 scope.go:117] "RemoveContainer" containerID="c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.093416 4720 scope.go:117] "RemoveContainer" containerID="c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.101777 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.113037 4720 scope.go:117] "RemoveContainer" containerID="c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.113081 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:48 crc kubenswrapper[4720]: E0122 07:10:48.113503 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a\": container with ID starting with c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a not found: ID does not exist" containerID="c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.113575 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a"} err="failed to get container status \"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a\": rpc error: code = NotFound desc = could not find container \"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a\": container with ID starting with c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a not found: ID does not exist" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.113608 4720 scope.go:117] "RemoveContainer" containerID="c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" Jan 22 07:10:48 crc kubenswrapper[4720]: E0122 07:10:48.113870 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a\": container with ID starting with c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a not found: ID does not exist" containerID="c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.113901 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a"} err="failed to get container status \"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a\": rpc error: code = NotFound desc = could not find container 
\"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a\": container with ID starting with c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a not found: ID does not exist" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.114037 4720 scope.go:117] "RemoveContainer" containerID="c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.117094 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a"} err="failed to get container status \"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a\": rpc error: code = NotFound desc = could not find container \"c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a\": container with ID starting with c298b8171a75662106ee80c6927aff0bc585f67d81ae73eebcff5c076fcf093a not found: ID does not exist" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.117141 4720 scope.go:117] "RemoveContainer" containerID="c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.117436 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a"} err="failed to get container status \"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a\": rpc error: code = NotFound desc = could not find container \"c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a\": container with ID starting with c566717b7cb55be3162006e9bd92877acac36809a30845349f9d6f8c6d2cfe0a not found: ID does not exist" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.199652 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:48 crc kubenswrapper[4720]: E0122 07:10:48.207026 4720 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.207079 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api" Jan 22 07:10:48 crc kubenswrapper[4720]: E0122 07:10:48.207123 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api-log" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.207131 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api-log" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.207442 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api-log" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.207453 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" containerName="cinder-api" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.210028 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.214571 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-internal-svc" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.214871 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-cinder-public-svc" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.215696 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-api-config-data" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.266079 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fda2d72-d200-4d0a-ae15-328580f16d78" path="/var/lib/kubelet/pods/3fda2d72-d200-4d0a-ae15-328580f16d78/volumes" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.266964 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332048 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332142 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332191 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332208 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332224 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332260 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332280 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332295 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.332356 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ct58\" (UniqueName: \"kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.380911 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434284 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ct58\" (UniqueName: \"kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434369 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 
07:10:48.434441 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434509 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434526 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434552 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434593 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434619 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434622 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.434904 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.443481 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc 
kubenswrapper[4720]: I0122 07:10:48.446669 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.447480 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.448275 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.448852 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.465641 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.480572 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ct58\" (UniqueName: 
\"kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.480980 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs\") pod \"cinder-api-0\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:48 crc kubenswrapper[4720]: I0122 07:10:48.583684 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:49 crc kubenswrapper[4720]: I0122 07:10:49.163057 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:10:49 crc kubenswrapper[4720]: W0122 07:10:49.167243 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd08ccf6_7d46_4a4a_a77b_571fa77bba36.slice/crio-fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df WatchSource:0}: Error finding container fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df: Status 404 returned error can't find the container with id fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df Jan 22 07:10:49 crc kubenswrapper[4720]: I0122 07:10:49.604284 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.087653 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" 
event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerStarted","Data":"a34476e119b754658da1e4b1043687520f9bb7dd42e99c7378ebf0c11f995894"} Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.088026 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerStarted","Data":"fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df"} Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.525596 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.666653 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.785395 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:50 crc kubenswrapper[4720]: I0122 07:10:50.972384 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:51 crc kubenswrapper[4720]: I0122 07:10:51.099824 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerStarted","Data":"54f87871a87363ea945108d40f1796b642e83c8e7ca3c68c49dce0ba66ee7d31"} Jan 22 07:10:51 crc kubenswrapper[4720]: I0122 07:10:51.100306 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:10:51 crc kubenswrapper[4720]: I0122 07:10:51.126768 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-api-0" podStartSLOduration=3.126742839 podStartE2EDuration="3.126742839s" 
podCreationTimestamp="2026-01-22 07:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:10:51.120042269 +0000 UTC m=+2143.261949004" watchObservedRunningTime="2026-01-22 07:10:51.126742839 +0000 UTC m=+2143.268649544" Jan 22 07:10:51 crc kubenswrapper[4720]: I0122 07:10:51.174817 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:51 crc kubenswrapper[4720]: I0122 07:10:51.974630 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:52 crc kubenswrapper[4720]: I0122 07:10:52.152765 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="cinder-backup" containerID="cri-o://d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef" gracePeriod=30 Jan 22 07:10:52 crc kubenswrapper[4720]: I0122 07:10:52.152961 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="probe" containerID="cri-o://2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e" gracePeriod=30 Jan 22 07:10:52 crc kubenswrapper[4720]: I0122 07:10:52.176902 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:10:52 crc kubenswrapper[4720]: I0122 07:10:52.178045 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" containerName="watcher-decision-engine" containerID="cri-o://81723a08301dd466878bfb4f71b3ed672eb44ba3a7aa82d52fb24b1c976f949b" 
gracePeriod=30 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.145714 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.146080 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-central-agent" containerID="cri-o://3f373f6594899caa060de0a7c781957193a49f39e24312aaf9742c8b300e488a" gracePeriod=30 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.146248 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="proxy-httpd" containerID="cri-o://5ef80c746c7c023464e9624ad446abfc3200ae365249db7487fafd95954842d3" gracePeriod=30 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.146287 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="sg-core" containerID="cri-o://0e84feceb255ba20a9809b499f4d49f9fcebdb02128d64c15388577a8d3649e7" gracePeriod=30 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.146345 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-notification-agent" containerID="cri-o://fcaa5ffa174c2659a1c4077ca79099f49c331ab17f3870fae02ea2968d89ca46" gracePeriod=30 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.167201 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.227142 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerID="2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e" exitCode=0 Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.227213 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerDied","Data":"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e"} Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.803549 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.845574 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.845614 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.845661 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.845946 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " 
Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.845968 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run" (OuterVolumeSpecName: "run") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev" (OuterVolumeSpecName: "dev") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846030 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2j2x\" (UniqueName: \"kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846058 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846290 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846307 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846356 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846396 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846407 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846440 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys" (OuterVolumeSpecName: "sys") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846448 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846473 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846480 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846581 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "var-lib-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846627 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846693 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846726 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846745 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "var-locks-cinder". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846752 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules\") pod \"33879577-2b66-4c4d-85bc-076f0ed1e056\" (UID: \"33879577-2b66-4c4d-85bc-076f0ed1e056\") " Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.846780 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847321 4720 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847338 4720 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-sys\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847347 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847360 4720 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847368 4720 reconciler_common.go:293] "Volume detached for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847376 4720 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847384 4720 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-run\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847392 4720 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-dev\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847401 4720 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.847411 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/33879577-2b66-4c4d-85bc-076f0ed1e056-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.853146 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.855355 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts" (OuterVolumeSpecName: "scripts") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.864440 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x" (OuterVolumeSpecName: "kube-api-access-x2j2x") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "kube-api-access-x2j2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.897323 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.941853 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data" (OuterVolumeSpecName: "config-data") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.949150 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.949292 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.949353 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2j2x\" (UniqueName: \"kubernetes.io/projected/33879577-2b66-4c4d-85bc-076f0ed1e056-kube-api-access-x2j2x\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.949407 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.949459 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:53 crc kubenswrapper[4720]: I0122 07:10:53.996648 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "33879577-2b66-4c4d-85bc-076f0ed1e056" (UID: "33879577-2b66-4c4d-85bc-076f0ed1e056"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.051638 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/33879577-2b66-4c4d-85bc-076f0ed1e056-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.238298 4720 generic.go:334] "Generic (PLEG): container finished" podID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerID="d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef" exitCode=0 Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.238386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerDied","Data":"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef"} Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.238427 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"33879577-2b66-4c4d-85bc-076f0ed1e056","Type":"ContainerDied","Data":"d6d6117c1120a89cb4be5130ea68b5ffa4767b58752805c0520d64ada6e81193"} Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.238466 4720 scope.go:117] "RemoveContainer" containerID="2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.238458 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243401 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba5fb927-8677-4576-85bf-75621f514a9d" containerID="5ef80c746c7c023464e9624ad446abfc3200ae365249db7487fafd95954842d3" exitCode=0 Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243437 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba5fb927-8677-4576-85bf-75621f514a9d" containerID="0e84feceb255ba20a9809b499f4d49f9fcebdb02128d64c15388577a8d3649e7" exitCode=2 Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243447 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba5fb927-8677-4576-85bf-75621f514a9d" containerID="3f373f6594899caa060de0a7c781957193a49f39e24312aaf9742c8b300e488a" exitCode=0 Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243473 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerDied","Data":"5ef80c746c7c023464e9624ad446abfc3200ae365249db7487fafd95954842d3"} Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243507 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerDied","Data":"0e84feceb255ba20a9809b499f4d49f9fcebdb02128d64c15388577a8d3649e7"} Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.243523 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerDied","Data":"3f373f6594899caa060de0a7c781957193a49f39e24312aaf9742c8b300e488a"} Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.268223 4720 scope.go:117] "RemoveContainer" containerID="d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 
07:10:54.275107 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.291830 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.300528 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:54 crc kubenswrapper[4720]: E0122 07:10:54.301107 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="cinder-backup" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.301132 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="cinder-backup" Jan 22 07:10:54 crc kubenswrapper[4720]: E0122 07:10:54.301166 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="probe" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.301180 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="probe" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.301526 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="probe" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.301556 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" containerName="cinder-backup" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.302893 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.305871 4720 scope.go:117] "RemoveContainer" containerID="2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e" Jan 22 07:10:54 crc kubenswrapper[4720]: E0122 07:10:54.310082 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e\": container with ID starting with 2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e not found: ID does not exist" containerID="2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.310136 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e"} err="failed to get container status \"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e\": rpc error: code = NotFound desc = could not find container \"2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e\": container with ID starting with 2891a8ccf19f9a593602d22f8f7b22201a8d9006d44d4c50d94fe07d43dbbe3e not found: ID does not exist" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.310167 4720 scope.go:117] "RemoveContainer" containerID="d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.310359 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-backup-config-data" Jan 22 07:10:54 crc kubenswrapper[4720]: E0122 07:10:54.311395 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef\": container with ID starting with 
d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef not found: ID does not exist" containerID="d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.311445 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef"} err="failed to get container status \"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef\": rpc error: code = NotFound desc = could not find container \"d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef\": container with ID starting with d920c0fcb026da0fa612db91e284c6adb790ed732454a2849a61321f4d4133ef not found: ID does not exist" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.357753 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.357868 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358081 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: 
I0122 07:10:54.358214 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358265 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358305 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358323 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358373 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358404 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358421 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358436 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358461 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtb72\" (UniqueName: \"kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358492 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.358510 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.360327 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.429350 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460709 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460816 4720 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460861 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460883 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460883 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460908 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461022 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtb72\" (UniqueName: \"kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72\") pod 
\"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461102 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461134 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461288 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461323 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461346 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc 
kubenswrapper[4720]: I0122 07:10:54.461401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461444 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461461 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461495 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461508 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461608 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461615 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.460961 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461665 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.461694 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.462027 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.462570 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.462983 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.463734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.466142 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.466332 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.467493 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.468870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.478544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.492530 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtb72\" (UniqueName: \"kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72\") pod \"cinder-backup-0\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:54 crc kubenswrapper[4720]: I0122 07:10:54.652368 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0"
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.198101 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"]
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.273475 4720 generic.go:334] "Generic (PLEG): container finished" podID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" containerID="81723a08301dd466878bfb4f71b3ed672eb44ba3a7aa82d52fb24b1c976f949b" exitCode=0
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.273685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"95a76f1b-07af-4869-b242-1cdbdb0b1f98","Type":"ContainerDied","Data":"81723a08301dd466878bfb4f71b3ed672eb44ba3a7aa82d52fb24b1c976f949b"}
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.275902 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerStarted","Data":"8b49f480cca34a15f9fb78d01db794e71c66aa4575f9e8345cfc8fbf0f51fe13"}
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.286551 4720 generic.go:334] "Generic (PLEG): container finished" podID="ba5fb927-8677-4576-85bf-75621f514a9d" containerID="fcaa5ffa174c2659a1c4077ca79099f49c331ab17f3870fae02ea2968d89ca46" exitCode=0
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.286620 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerDied","Data":"fcaa5ffa174c2659a1c4077ca79099f49c331ab17f3870fae02ea2968d89ca46"}
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.317639 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378217 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378308 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm59q\" (UniqueName: \"kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378343 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378490 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378590 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.378683 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle\") pod \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\" (UID: \"95a76f1b-07af-4869-b242-1cdbdb0b1f98\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.381500 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs" (OuterVolumeSpecName: "logs") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.387959 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q" (OuterVolumeSpecName: "kube-api-access-lm59q") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "kube-api-access-lm59q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.435323 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.438065 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.513492 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.513527 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm59q\" (UniqueName: \"kubernetes.io/projected/95a76f1b-07af-4869-b242-1cdbdb0b1f98-kube-api-access-lm59q\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.513539 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.513549 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95a76f1b-07af-4869-b242-1cdbdb0b1f98-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.572957 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.574116 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data" (OuterVolumeSpecName: "config-data") pod "95a76f1b-07af-4869-b242-1cdbdb0b1f98" (UID: "95a76f1b-07af-4869-b242-1cdbdb0b1f98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.612465 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.615803 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.615826 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95a76f1b-07af-4869-b242-1cdbdb0b1f98-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.692975 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_95a76f1b-07af-4869-b242-1cdbdb0b1f98/watcher-decision-engine/0.log"
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.717341 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.717726 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.717956 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718211 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718322 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718418 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718527 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cptjb\" (UniqueName: \"kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718645 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts\") pod \"ba5fb927-8677-4576-85bf-75621f514a9d\" (UID: \"ba5fb927-8677-4576-85bf-75621f514a9d\") "
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.718666 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.719314 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.725506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb" (OuterVolumeSpecName: "kube-api-access-cptjb") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "kube-api-access-cptjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.728829 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts" (OuterVolumeSpecName: "scripts") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.773260 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.811784 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.820257 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-scheduler-0"
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822536 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822587 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822599 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822610 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cptjb\" (UniqueName: \"kubernetes.io/projected/ba5fb927-8677-4576-85bf-75621f514a9d-kube-api-access-cptjb\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822640 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-scripts\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.822651 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba5fb927-8677-4576-85bf-75621f514a9d-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.835136 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.870366 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"]
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.878588 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data" (OuterVolumeSpecName: "config-data") pod "ba5fb927-8677-4576-85bf-75621f514a9d" (UID: "ba5fb927-8677-4576-85bf-75621f514a9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.925432 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:55 crc kubenswrapper[4720]: I0122 07:10:55.925471 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba5fb927-8677-4576-85bf-75621f514a9d-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.235810 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33879577-2b66-4c4d-85bc-076f0ed1e056" path="/var/lib/kubelet/pods/33879577-2b66-4c4d-85bc-076f0ed1e056/volumes"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.297555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"95a76f1b-07af-4869-b242-1cdbdb0b1f98","Type":"ContainerDied","Data":"9f37c7cc01420a6cfef868e161e13747d31de623a5722864437ebfa1df22c805"}
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.297586 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.297624 4720 scope.go:117] "RemoveContainer" containerID="81723a08301dd466878bfb4f71b3ed672eb44ba3a7aa82d52fb24b1c976f949b"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.303489 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerStarted","Data":"1a4839933a75b8972cdc01bc3de5f1b0c79ca17dd5eda0b12e5e367dfcb45b2e"}
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.303534 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerStarted","Data":"3a8e01ab11c787cbb8acfcf350f75e8e7b24fc3c966e42bc9b03932fcdc149a7"}
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.316688 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="cinder-scheduler" containerID="cri-o://753d6c0185b49e4e7fdee9d1211d3956917c4728af83553450745e1adae50bbd" gracePeriod=30
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.317054 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.317840 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ba5fb927-8677-4576-85bf-75621f514a9d","Type":"ContainerDied","Data":"238f963a3d7f27ac0966c30ea48077e94e8f9d9f4e848d71654873a6b6ac0122"}
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.317907 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="probe" containerID="cri-o://7c06979cbedc656c279cca90e6dc5e1aa723ead371a603c21e08615651217cf2" gracePeriod=30
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.326010 4720 scope.go:117] "RemoveContainer" containerID="5ef80c746c7c023464e9624ad446abfc3200ae365249db7487fafd95954842d3"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.337637 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-backup-0" podStartSLOduration=2.337609216 podStartE2EDuration="2.337609216s" podCreationTimestamp="2026-01-22 07:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:10:56.33670361 +0000 UTC m=+2148.478610315" watchObservedRunningTime="2026-01-22 07:10:56.337609216 +0000 UTC m=+2148.479515921"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.367463 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.376756 4720 scope.go:117] "RemoveContainer" containerID="0e84feceb255ba20a9809b499f4d49f9fcebdb02128d64c15388577a8d3649e7"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.379014 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.399063 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.410123 4720 scope.go:117] "RemoveContainer" containerID="fcaa5ffa174c2659a1c4077ca79099f49c331ab17f3870fae02ea2968d89ca46"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.413629 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.430787 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: E0122 07:10:56.431278 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-notification-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431296 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-notification-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: E0122 07:10:56.431314 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="sg-core"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431321 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="sg-core"
Jan 22 07:10:56 crc kubenswrapper[4720]: E0122 07:10:56.431331 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" containerName="watcher-decision-engine"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431340 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" containerName="watcher-decision-engine"
Jan 22 07:10:56 crc kubenswrapper[4720]: E0122 07:10:56.431354 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="proxy-httpd"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431360 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="proxy-httpd"
Jan 22 07:10:56 crc kubenswrapper[4720]: E0122 07:10:56.431385 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-central-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431390 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-central-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431534 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="proxy-httpd"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431550 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="sg-core"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431560 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" containerName="watcher-decision-engine"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431570 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-central-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.431584 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" containerName="ceilometer-notification-agent"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.433216 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.440127 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.440247 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.440352 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448450 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448523 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448557 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448607 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448631 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448686 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.448829 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxgjz\" (UniqueName: \"kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.454033 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.456678 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.463281 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.467888 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.518089 4720 scope.go:117] "RemoveContainer" containerID="3f373f6594899caa060de0a7c781957193a49f39e24312aaf9742c8b300e488a"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.521046 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550251 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550321 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550407 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550445 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxgjz\" (UniqueName: \"kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550480 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550500 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550553 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550579 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550600 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmt7s\" (UniqueName: \"kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550620 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550644 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550674 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") "
pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.550691 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.551650 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.552331 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.556990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.558327 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.558829 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.560206 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.562678 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.568527 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxgjz\" (UniqueName: \"kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz\") pod \"ceilometer-0\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652476 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652575 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652616 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652637 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.652704 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmt7s\" (UniqueName: \"kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.658734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.668508 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.673265 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.675398 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.701155 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmt7s\" (UniqueName: \"kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.705219 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.760605 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:10:56 crc kubenswrapper[4720]: I0122 07:10:56.781625 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:10:57 crc kubenswrapper[4720]: I0122 07:10:57.693340 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:10:57 crc kubenswrapper[4720]: I0122 07:10:57.883974 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.496272 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a76f1b-07af-4869-b242-1cdbdb0b1f98" path="/var/lib/kubelet/pods/95a76f1b-07af-4869-b242-1cdbdb0b1f98/volumes" Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.497216 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba5fb927-8677-4576-85bf-75621f514a9d" path="/var/lib/kubelet/pods/ba5fb927-8677-4576-85bf-75621f514a9d/volumes" Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.624104 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c37e37bb-9267-4a15-90a8-cf5cb101730d","Type":"ContainerStarted","Data":"37ffc5b53586441d22624c34ddde403cbd2dc8c740d4a9892c32e1b4a7a9b8e4"} Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.624184 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"c37e37bb-9267-4a15-90a8-cf5cb101730d","Type":"ContainerStarted","Data":"3c346aa67441273617cc4f15cbce4e97cc797efab3c70ec6333713f416535cc8"} Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.631242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerDied","Data":"7c06979cbedc656c279cca90e6dc5e1aa723ead371a603c21e08615651217cf2"} Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.631149 4720 generic.go:334] "Generic (PLEG): container finished" podID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerID="7c06979cbedc656c279cca90e6dc5e1aa723ead371a603c21e08615651217cf2" exitCode=0 Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.638048 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerStarted","Data":"7e3983b058e0562b56743837146187be98b6bb11f0a5b478338819ea5ef33f61"} Jan 22 07:10:58 crc kubenswrapper[4720]: I0122 07:10:58.654060 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.654029453 podStartE2EDuration="2.654029453s" podCreationTimestamp="2026-01-22 07:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:10:58.64717165 +0000 UTC m=+2150.789078355" watchObservedRunningTime="2026-01-22 07:10:58.654029453 +0000 UTC m=+2150.795936158" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.365516 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.656305 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.670746 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerStarted","Data":"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec"} Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.670827 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerStarted","Data":"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5"} Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.676054 4720 generic.go:334] "Generic (PLEG): container finished" podID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerID="753d6c0185b49e4e7fdee9d1211d3956917c4728af83553450745e1adae50bbd" exitCode=0 Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.676132 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerDied","Data":"753d6c0185b49e4e7fdee9d1211d3956917c4728af83553450745e1adae50bbd"} Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.709525 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.761583 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.761712 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.761876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.761920 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.762184 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.762224 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.762276 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldzhm\" (UniqueName: \"kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.762331 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle\") pod \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\" (UID: \"792291b0-c266-40bf-a0f1-650b6d8f4f6a\") " Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.763429 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/792291b0-c266-40bf-a0f1-650b6d8f4f6a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.773365 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts" (OuterVolumeSpecName: "scripts") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.775499 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.779571 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm" (OuterVolumeSpecName: "kube-api-access-ldzhm") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "kube-api-access-ldzhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.780059 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.780132 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.865859 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.866186 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.866263 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldzhm\" (UniqueName: \"kubernetes.io/projected/792291b0-c266-40bf-a0f1-650b6d8f4f6a-kube-api-access-ldzhm\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.885375 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.896567 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data" (OuterVolumeSpecName: "config-data") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.971241 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.971277 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:10:59 crc kubenswrapper[4720]: I0122 07:10:59.976284 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "792291b0-c266-40bf-a0f1-650b6d8f4f6a" (UID: "792291b0-c266-40bf-a0f1-650b6d8f4f6a"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.073173 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/792291b0-c266-40bf-a0f1-650b6d8f4f6a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.562675 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.687257 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"792291b0-c266-40bf-a0f1-650b6d8f4f6a","Type":"ContainerDied","Data":"a5d0a5f4a9822c623b591aaa7ddf908cfeac3a63c26d25ad2a9d0a7492c3b9e4"} Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.687344 4720 scope.go:117] "RemoveContainer" containerID="7c06979cbedc656c279cca90e6dc5e1aa723ead371a603c21e08615651217cf2" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.687424 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.690128 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerStarted","Data":"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da"} Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.707455 4720 scope.go:117] "RemoveContainer" containerID="753d6c0185b49e4e7fdee9d1211d3956917c4728af83553450745e1adae50bbd" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.729771 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.742858 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.772779 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:00 crc kubenswrapper[4720]: E0122 07:11:00.773235 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="cinder-scheduler" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.773257 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="cinder-scheduler" Jan 22 07:11:00 crc kubenswrapper[4720]: E0122 07:11:00.773287 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="probe" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.773296 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="probe" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.773470 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="probe" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.773484 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" containerName="cinder-scheduler" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.774433 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.781620 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cinder-scheduler-config-data" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.818022 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.893339 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.893408 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.893431 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.893732 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.893845 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thfdr\" (UniqueName: \"kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.894041 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.894141 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996045 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " 
pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996374 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996458 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996558 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996633 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thfdr\" (UniqueName: \"kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996740 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc 
kubenswrapper[4720]: I0122 07:11:00.996831 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:00 crc kubenswrapper[4720]: I0122 07:11:00.996626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.001933 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.002769 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.008363 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.015392 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.018429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.053534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thfdr\" (UniqueName: \"kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr\") pod \"cinder-scheduler-0\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.091007 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.749591 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:01 crc kubenswrapper[4720]: W0122 07:11:01.757348 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf21b7de7_3386_4be9_bf70_d57bacd76850.slice/crio-68319237fbdc25fd19d5457396ab4546a13a35d72bb7cb7afcb90bc10dbae0b7 WatchSource:0}: Error finding container 68319237fbdc25fd19d5457396ab4546a13a35d72bb7cb7afcb90bc10dbae0b7: Status 404 returned error can't find the container with id 68319237fbdc25fd19d5457396ab4546a13a35d72bb7cb7afcb90bc10dbae0b7 Jan 22 07:11:01 crc kubenswrapper[4720]: I0122 07:11:01.768509 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.043124 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.282951 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792291b0-c266-40bf-a0f1-650b6d8f4f6a" path="/var/lib/kubelet/pods/792291b0-c266-40bf-a0f1-650b6d8f4f6a/volumes" Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.720127 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerStarted","Data":"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca"} Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.720305 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 
07:11:02.729978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerStarted","Data":"cb2c875123a8b9394c3a82b32dc92196135d498c8788ed2029efc6fbd9529754"} Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.730036 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerStarted","Data":"68319237fbdc25fd19d5457396ab4546a13a35d72bb7cb7afcb90bc10dbae0b7"} Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.750084 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=3.125372335 podStartE2EDuration="6.750060878s" podCreationTimestamp="2026-01-22 07:10:56 +0000 UTC" firstStartedPulling="2026-01-22 07:10:57.905746821 +0000 UTC m=+2150.047653526" lastFinishedPulling="2026-01-22 07:11:01.530435364 +0000 UTC m=+2153.672342069" observedRunningTime="2026-01-22 07:11:02.748762321 +0000 UTC m=+2154.890669046" watchObservedRunningTime="2026-01-22 07:11:02.750060878 +0000 UTC m=+2154.891967583" Jan 22 07:11:02 crc kubenswrapper[4720]: I0122 07:11:02.963504 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:03 crc kubenswrapper[4720]: I0122 07:11:03.742738 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerStarted","Data":"099b168a514a1d149ba4874197cc29fc9f737acda58f63e495f7bb0816d7493e"} Jan 22 07:11:03 crc kubenswrapper[4720]: I0122 07:11:03.772593 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder-scheduler-0" podStartSLOduration=3.772572203 
podStartE2EDuration="3.772572203s" podCreationTimestamp="2026-01-22 07:11:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:03.76679843 +0000 UTC m=+2155.908705135" watchObservedRunningTime="2026-01-22 07:11:03.772572203 +0000 UTC m=+2155.914478908" Jan 22 07:11:04 crc kubenswrapper[4720]: I0122 07:11:04.174849 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:05 crc kubenswrapper[4720]: I0122 07:11:05.229846 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:11:05 crc kubenswrapper[4720]: I0122 07:11:05.384038 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:06 crc kubenswrapper[4720]: I0122 07:11:06.092558 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:06 crc kubenswrapper[4720]: I0122 07:11:06.621809 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:06 crc kubenswrapper[4720]: I0122 07:11:06.782824 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:06 crc kubenswrapper[4720]: I0122 07:11:06.819275 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:07 crc kubenswrapper[4720]: I0122 07:11:07.776076 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:07 crc kubenswrapper[4720]: I0122 07:11:07.816674 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:07 crc kubenswrapper[4720]: I0122 07:11:07.896978 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.127122 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.363427 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.405656 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-4dc5v"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.415366 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-sync-4dc5v"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.466014 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.466321 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="cinder-scheduler" containerID="cri-o://cb2c875123a8b9394c3a82b32dc92196135d498c8788ed2029efc6fbd9529754" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.466400 4720 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="watcher-kuttl-default/cinder-scheduler-0" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="probe" containerID="cri-o://099b168a514a1d149ba4874197cc29fc9f737acda58f63e495f7bb0816d7493e" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.482225 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.482648 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="cinder-backup" containerID="cri-o://3a8e01ab11c787cbb8acfcf350f75e8e7b24fc3c966e42bc9b03932fcdc149a7" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.482737 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-backup-0" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="probe" containerID="cri-o://1a4839933a75b8972cdc01bc3de5f1b0c79ca17dd5eda0b12e5e367dfcb45b2e" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.545843 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/cinder17a7-account-delete-kw5t9"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.547444 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.559399 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.559774 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api-log" containerID="cri-o://a34476e119b754658da1e4b1043687520f9bb7dd42e99c7378ebf0c11f995894" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.559872 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/cinder-api-0" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api" containerID="cri-o://54f87871a87363ea945108d40f1796b642e83c8e7ca3c68c49dce0ba66ee7d31" gracePeriod=30 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.568737 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder17a7-account-delete-kw5t9"] Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.676855 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dsnd\" (UniqueName: \"kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.677049 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " 
pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.779110 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.779186 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dsnd\" (UniqueName: \"kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.780381 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.795185 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerID="a34476e119b754658da1e4b1043687520f9bb7dd42e99c7378ebf0c11f995894" exitCode=143 Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.796228 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerDied","Data":"a34476e119b754658da1e4b1043687520f9bb7dd42e99c7378ebf0c11f995894"} Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.810990 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6dsnd\" (UniqueName: \"kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd\") pod \"cinder17a7-account-delete-kw5t9\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:09 crc kubenswrapper[4720]: I0122 07:11:09.876825 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.263603 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63210d7b-5ccb-49b7-a85c-0c136a6ab0c9" path="/var/lib/kubelet/pods/63210d7b-5ccb-49b7-a85c-0c136a6ab0c9/volumes" Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.381536 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/cinder17a7-account-delete-kw5t9"] Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.642521 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.824320 4720 generic.go:334] "Generic (PLEG): container finished" podID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerID="1a4839933a75b8972cdc01bc3de5f1b0c79ca17dd5eda0b12e5e367dfcb45b2e" exitCode=0 Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.824684 4720 generic.go:334] "Generic (PLEG): container finished" podID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerID="3a8e01ab11c787cbb8acfcf350f75e8e7b24fc3c966e42bc9b03932fcdc149a7" exitCode=0 Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.824741 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerDied","Data":"1a4839933a75b8972cdc01bc3de5f1b0c79ca17dd5eda0b12e5e367dfcb45b2e"} Jan 22 07:11:10 crc 
kubenswrapper[4720]: I0122 07:11:10.824775 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerDied","Data":"3a8e01ab11c787cbb8acfcf350f75e8e7b24fc3c966e42bc9b03932fcdc149a7"} Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.827542 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" event={"ID":"176f161a-f1f2-4b8b-9824-98379378d401","Type":"ContainerStarted","Data":"ae3e4a93a59399cddf562b020360a952cf4aff72540f0a9855c2754dd4ced9d1"} Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.827605 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" event={"ID":"176f161a-f1f2-4b8b-9824-98379378d401","Type":"ContainerStarted","Data":"8d6caa59fb075796beab7eaa99dc64c67ae79a64f4c592032608d26c8ed0edb7"} Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.833651 4720 generic.go:334] "Generic (PLEG): container finished" podID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerID="099b168a514a1d149ba4874197cc29fc9f737acda58f63e495f7bb0816d7493e" exitCode=0 Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.833680 4720 generic.go:334] "Generic (PLEG): container finished" podID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerID="cb2c875123a8b9394c3a82b32dc92196135d498c8788ed2029efc6fbd9529754" exitCode=0 Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.833737 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerDied","Data":"099b168a514a1d149ba4874197cc29fc9f737acda58f63e495f7bb0816d7493e"} Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.833787 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" 
event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerDied","Data":"cb2c875123a8b9394c3a82b32dc92196135d498c8788ed2029efc6fbd9529754"} Jan 22 07:11:10 crc kubenswrapper[4720]: I0122 07:11:10.854021 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" podStartSLOduration=1.8539991059999998 podStartE2EDuration="1.853999106s" podCreationTimestamp="2026-01-22 07:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:10.847145503 +0000 UTC m=+2162.989052208" watchObservedRunningTime="2026-01-22 07:11:10.853999106 +0000 UTC m=+2162.995905801" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.168199 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219072 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219180 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219238 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc 
kubenswrapper[4720]: I0122 07:11:11.219289 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219362 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219412 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219439 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219495 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtb72\" (UniqueName: \"kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72\") pod 
\"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219570 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219620 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219653 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219678 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219723 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219773 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.219831 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules\") pod \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\" (UID: \"0e9e7b3c-c94e-46e8-b6be-97f768f9993c\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.220425 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.223164 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.223248 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run" (OuterVolumeSpecName: "run") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.233195 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev" (OuterVolumeSpecName: "dev") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.234159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.234261 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.237131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.237212 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.237258 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.241370 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts" (OuterVolumeSpecName: "scripts") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.241438 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys" (OuterVolumeSpecName: "sys") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.244902 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.246501 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72" (OuterVolumeSpecName: "kube-api-access-dtb72") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "kube-api-access-dtb72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.288991 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322170 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322211 4720 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-sys\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322222 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322232 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtb72\" (UniqueName: \"kubernetes.io/projected/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-kube-api-access-dtb72\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322242 4720 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-dev\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322251 4720 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-locks-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322261 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322270 4720 
reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-var-lib-cinder\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322280 4720 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322289 4720 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322297 4720 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-run\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322305 4720 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.322313 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.333291 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.381697 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data" (OuterVolumeSpecName: "config-data") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424125 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thfdr\" (UniqueName: \"kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424254 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424290 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424369 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424404 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424468 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424483 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.424585 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom\") pod \"f21b7de7-3386-4be9-bf70-d57bacd76850\" (UID: \"f21b7de7-3386-4be9-bf70-d57bacd76850\") " Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.425053 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.425084 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.425095 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f21b7de7-3386-4be9-bf70-d57bacd76850-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.428790 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr" (OuterVolumeSpecName: "kube-api-access-thfdr") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "kube-api-access-thfdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.429811 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts" (OuterVolumeSpecName: "scripts") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.429950 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "0e9e7b3c-c94e-46e8-b6be-97f768f9993c" (UID: "0e9e7b3c-c94e-46e8-b6be-97f768f9993c"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.432652 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.481992 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.526965 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thfdr\" (UniqueName: \"kubernetes.io/projected/f21b7de7-3386-4be9-bf70-d57bacd76850-kube-api-access-thfdr\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.527004 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.527016 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/0e9e7b3c-c94e-46e8-b6be-97f768f9993c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.527024 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.527034 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.556071 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data" (OuterVolumeSpecName: "config-data") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.597101 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f21b7de7-3386-4be9-bf70-d57bacd76850" (UID: "f21b7de7-3386-4be9-bf70-d57bacd76850"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.629081 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.629123 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21b7de7-3386-4be9-bf70-d57bacd76850-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.861154 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.863192 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-backup-0" event={"ID":"0e9e7b3c-c94e-46e8-b6be-97f768f9993c","Type":"ContainerDied","Data":"8b49f480cca34a15f9fb78d01db794e71c66aa4575f9e8345cfc8fbf0f51fe13"} Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.863280 4720 scope.go:117] "RemoveContainer" containerID="1a4839933a75b8972cdc01bc3de5f1b0c79ca17dd5eda0b12e5e367dfcb45b2e" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.863463 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-backup-0" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.871492 4720 generic.go:334] "Generic (PLEG): container finished" podID="176f161a-f1f2-4b8b-9824-98379378d401" containerID="ae3e4a93a59399cddf562b020360a952cf4aff72540f0a9855c2754dd4ced9d1" exitCode=0 Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.871633 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" event={"ID":"176f161a-f1f2-4b8b-9824-98379378d401","Type":"ContainerDied","Data":"ae3e4a93a59399cddf562b020360a952cf4aff72540f0a9855c2754dd4ced9d1"} Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.877971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-scheduler-0" event={"ID":"f21b7de7-3386-4be9-bf70-d57bacd76850","Type":"ContainerDied","Data":"68319237fbdc25fd19d5457396ab4546a13a35d72bb7cb7afcb90bc10dbae0b7"} Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.878092 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-scheduler-0" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.904143 4720 scope.go:117] "RemoveContainer" containerID="3a8e01ab11c787cbb8acfcf350f75e8e7b24fc3c966e42bc9b03932fcdc149a7" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.932093 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.939209 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-scheduler-0"] Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.941499 4720 scope.go:117] "RemoveContainer" containerID="099b168a514a1d149ba4874197cc29fc9f737acda58f63e495f7bb0816d7493e" Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.959075 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.968812 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-backup-0"] Jan 22 07:11:11 crc kubenswrapper[4720]: I0122 07:11:11.970140 4720 scope.go:117] "RemoveContainer" containerID="cb2c875123a8b9394c3a82b32dc92196135d498c8788ed2029efc6fbd9529754" Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.221078 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" path="/var/lib/kubelet/pods/0e9e7b3c-c94e-46e8-b6be-97f768f9993c/volumes" Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.221714 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" path="/var/lib/kubelet/pods/f21b7de7-3386-4be9-bf70-d57bacd76850/volumes" Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.376154 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 
07:11:12.376505 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="c37e37bb-9267-4a15-90a8-cf5cb101730d" containerName="watcher-decision-engine" containerID="cri-o://37ffc5b53586441d22624c34ddde403cbd2dc8c740d4a9892c32e1b4a7a9b8e4" gracePeriod=30 Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.781276 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.781995 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-central-agent" containerID="cri-o://0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5" gracePeriod=30 Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.782039 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="sg-core" containerID="cri-o://7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da" gracePeriod=30 Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.782094 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="proxy-httpd" containerID="cri-o://ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca" gracePeriod=30 Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 07:11:12.782139 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-notification-agent" containerID="cri-o://74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec" gracePeriod=30 Jan 22 07:11:12 crc kubenswrapper[4720]: I0122 
07:11:12.802512 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": EOF" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.084760 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.333294 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.375775 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts\") pod \"176f161a-f1f2-4b8b-9824-98379378d401\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.376005 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dsnd\" (UniqueName: \"kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd\") pod \"176f161a-f1f2-4b8b-9824-98379378d401\" (UID: \"176f161a-f1f2-4b8b-9824-98379378d401\") " Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.380689 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "176f161a-f1f2-4b8b-9824-98379378d401" (UID: "176f161a-f1f2-4b8b-9824-98379378d401"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.697329 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/176f161a-f1f2-4b8b-9824-98379378d401-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.699582 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd" (OuterVolumeSpecName: "kube-api-access-6dsnd") pod "176f161a-f1f2-4b8b-9824-98379378d401" (UID: "176f161a-f1f2-4b8b-9824-98379378d401"). InnerVolumeSpecName "kube-api-access-6dsnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.798579 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dsnd\" (UniqueName: \"kubernetes.io/projected/176f161a-f1f2-4b8b-9824-98379378d401-kube-api-access-6dsnd\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.912733 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.913815 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder17a7-account-delete-kw5t9" event={"ID":"176f161a-f1f2-4b8b-9824-98379378d401","Type":"ContainerDied","Data":"8d6caa59fb075796beab7eaa99dc64c67ae79a64f4c592032608d26c8ed0edb7"} Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.913875 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6caa59fb075796beab7eaa99dc64c67ae79a64f4c592032608d26c8ed0edb7" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.927367 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerID="54f87871a87363ea945108d40f1796b642e83c8e7ca3c68c49dce0ba66ee7d31" exitCode=0 Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.927674 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerDied","Data":"54f87871a87363ea945108d40f1796b642e83c8e7ca3c68c49dce0ba66ee7d31"} Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.927731 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/cinder-api-0" event={"ID":"cd08ccf6-7d46-4a4a-a77b-571fa77bba36","Type":"ContainerDied","Data":"fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df"} Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.927749 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fff401435ff0142c8b93a3a3ac2e08f891b08f63e316e2b31bf61fbc0b4cb0df" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.939585 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.940700 4720 generic.go:334] "Generic (PLEG): container finished" podID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerID="ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca" exitCode=0 Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.941216 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerDied","Data":"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca"} Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.941287 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerDied","Data":"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da"} Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.943574 4720 generic.go:334] "Generic (PLEG): container finished" podID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerID="7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da" exitCode=2 Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.943658 4720 generic.go:334] "Generic (PLEG): container finished" podID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerID="74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec" exitCode=0 Jan 22 07:11:13 crc kubenswrapper[4720]: I0122 07:11:13.943693 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerDied","Data":"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec"} Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.001866 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002067 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002186 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002224 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 
07:11:14.002311 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002347 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002370 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ct58\" (UniqueName: \"kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.002435 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data\") pod \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\" (UID: \"cd08ccf6-7d46-4a4a-a77b-571fa77bba36\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.004346 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.004841 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs" (OuterVolumeSpecName: "logs") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.028205 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.042712 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts" (OuterVolumeSpecName: "scripts") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.070461 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.070529 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58" (OuterVolumeSpecName: "kube-api-access-4ct58") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "kube-api-access-4ct58". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103372 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103402 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ct58\" (UniqueName: \"kubernetes.io/projected/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-kube-api-access-4ct58\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103414 4720 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103423 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103431 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.103440 4720 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.107097 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data" (OuterVolumeSpecName: "config-data") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.136390 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.136565 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.187121 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cd08ccf6-7d46-4a4a-a77b-571fa77bba36" (UID: "cd08ccf6-7d46-4a4a-a77b-571fa77bba36"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.205004 4720 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.205056 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.205069 4720 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.205081 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cd08ccf6-7d46-4a4a-a77b-571fa77bba36-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.250332 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306246 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306310 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306368 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306442 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxgjz\" (UniqueName: \"kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306468 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306499 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306503 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306563 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.306587 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd\") pod \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\" (UID: \"25e30ff1-96ee-4b8d-94d7-bb2803d7641d\") " Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.309075 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.309676 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.312049 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts" (OuterVolumeSpecName: "scripts") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.312608 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz" (OuterVolumeSpecName: "kube-api-access-fxgjz") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "kube-api-access-fxgjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.362303 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.377333 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.382350 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408484 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxgjz\" (UniqueName: \"kubernetes.io/projected/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-kube-api-access-fxgjz\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408513 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408524 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408534 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408545 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408554 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.408566 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.427232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data" (OuterVolumeSpecName: "config-data") pod "25e30ff1-96ee-4b8d-94d7-bb2803d7641d" (UID: "25e30ff1-96ee-4b8d-94d7-bb2803d7641d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.509124 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25e30ff1-96ee-4b8d-94d7-bb2803d7641d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.578013 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-db-create-q6sv2"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.584453 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-db-create-q6sv2"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.603719 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder17a7-account-delete-kw5t9"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.614546 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.623680 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder17a7-account-delete-kw5t9"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 
07:11:14.630891 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-17a7-account-create-update-lx4js"] Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.956376 4720 generic.go:334] "Generic (PLEG): container finished" podID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerID="0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5" exitCode=0 Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.956679 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/cinder-api-0" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.957097 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerDied","Data":"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5"} Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.957123 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.957171 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"25e30ff1-96ee-4b8d-94d7-bb2803d7641d","Type":"ContainerDied","Data":"7e3983b058e0562b56743837146187be98b6bb11f0a5b478338819ea5ef33f61"} Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.957194 4720 scope.go:117] "RemoveContainer" containerID="ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca" Jan 22 07:11:14 crc kubenswrapper[4720]: I0122 07:11:14.986314 4720 scope.go:117] "RemoveContainer" containerID="7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.004979 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.016116 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/cinder-api-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.177991 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.183392 4720 scope.go:117] "RemoveContainer" containerID="74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.185773 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.225613 4720 scope.go:117] "RemoveContainer" containerID="0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.230595 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231000 4720 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231018 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231034 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-notification-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231042 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-notification-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231056 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api-log" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231062 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api-log" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231074 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231080 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231092 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="proxy-httpd" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231099 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="proxy-httpd" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231110 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-central-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231118 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-central-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231130 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="sg-core" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231135 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="sg-core" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231144 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="cinder-backup" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231151 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="cinder-backup" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231161 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176f161a-f1f2-4b8b-9824-98379378d401" containerName="mariadb-account-delete" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231169 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="176f161a-f1f2-4b8b-9824-98379378d401" containerName="mariadb-account-delete" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231184 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="cinder-scheduler" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231190 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="cinder-scheduler" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.231202 4720 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231208 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231353 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="176f161a-f1f2-4b8b-9824-98379378d401" containerName="mariadb-account-delete" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231369 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231380 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-notification-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231386 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="ceilometer-central-agent" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231396 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="cinder-scheduler" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231405 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9e7b3c-c94e-46e8-b6be-97f768f9993c" containerName="cinder-backup" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231412 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21b7de7-3386-4be9-bf70-d57bacd76850" containerName="probe" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231419 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="proxy-httpd" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231427 4720 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api-log" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231435 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.231441 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" containerName="sg-core" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.237021 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.243316 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.243570 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.243735 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.255114 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.287980 4720 scope.go:117] "RemoveContainer" containerID="ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.288517 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca\": container with ID starting with ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca not found: ID does not exist" containerID="ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca" 
Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.288548 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca"} err="failed to get container status \"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca\": rpc error: code = NotFound desc = could not find container \"ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca\": container with ID starting with ab8261bf51b5336c381a579811d613ec9e9313de656c2274e5e3c89d39bc09ca not found: ID does not exist" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.288570 4720 scope.go:117] "RemoveContainer" containerID="7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.288812 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da\": container with ID starting with 7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da not found: ID does not exist" containerID="7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.288830 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da"} err="failed to get container status \"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da\": rpc error: code = NotFound desc = could not find container \"7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da\": container with ID starting with 7a479aba9d8840a8888b478cff9acf5bb3a09d6534f18935cbb6c4162c11c1da not found: ID does not exist" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.288842 4720 scope.go:117] "RemoveContainer" 
containerID="74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.289056 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec\": container with ID starting with 74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec not found: ID does not exist" containerID="74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.289085 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec"} err="failed to get container status \"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec\": rpc error: code = NotFound desc = could not find container \"74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec\": container with ID starting with 74ecf3620a1619c6572e4a3ee81cb7259519a34f78b6c750b0f29000e20c55ec not found: ID does not exist" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.289098 4720 scope.go:117] "RemoveContainer" containerID="0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5" Jan 22 07:11:15 crc kubenswrapper[4720]: E0122 07:11:15.289305 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5\": container with ID starting with 0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5 not found: ID does not exist" containerID="0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.289321 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5"} err="failed to get container status \"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5\": rpc error: code = NotFound desc = could not find container \"0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5\": container with ID starting with 0661d8728d51d3efc6bf42662279a9eec0139ae41fb633e12cca81ff9c5b76e5 not found: ID does not exist" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351402 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351423 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdm8g\" (UniqueName: \"kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351440 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351453 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351823 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.351997 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.352473 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454726 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc 
kubenswrapper[4720]: I0122 07:11:15.454775 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454799 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdm8g\" (UniqueName: \"kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454815 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454836 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454879 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454932 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.454991 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.455395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.455670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.461336 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.466797 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc 
kubenswrapper[4720]: I0122 07:11:15.467685 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.469255 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.471180 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.474477 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdm8g\" (UniqueName: \"kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g\") pod \"ceilometer-0\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.519017 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:15 crc kubenswrapper[4720]: I0122 07:11:15.576951 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.023290 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:16 crc kubenswrapper[4720]: W0122 07:11:16.031147 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd50208a_0f02_4a61_9393_5d24423ffd69.slice/crio-35c46ddaa92d075bf5487171c87492a57d4d02fd1115d60ef34a593e7268a940 WatchSource:0}: Error finding container 35c46ddaa92d075bf5487171c87492a57d4d02fd1115d60ef34a593e7268a940: Status 404 returned error can't find the container with id 35c46ddaa92d075bf5487171c87492a57d4d02fd1115d60ef34a593e7268a940 Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.223548 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0024e023-1c1d-4b82-bd73-fc7646298fb6" path="/var/lib/kubelet/pods/0024e023-1c1d-4b82-bd73-fc7646298fb6/volumes" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.224833 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176f161a-f1f2-4b8b-9824-98379378d401" path="/var/lib/kubelet/pods/176f161a-f1f2-4b8b-9824-98379378d401/volumes" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.226147 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e30ff1-96ee-4b8d-94d7-bb2803d7641d" path="/var/lib/kubelet/pods/25e30ff1-96ee-4b8d-94d7-bb2803d7641d/volumes" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.228756 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" path="/var/lib/kubelet/pods/cd08ccf6-7d46-4a4a-a77b-571fa77bba36/volumes" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.230272 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc" 
path="/var/lib/kubelet/pods/d34d613b-05e5-4e43-8bf6-0eb5a5b1bebc/volumes" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.764775 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.973415 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerStarted","Data":"279d85fac63b31574b0a4f107177c8bb36fc6e2884c4d61f91ac06f36c619862"} Jan 22 07:11:16 crc kubenswrapper[4720]: I0122 07:11:16.973757 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerStarted","Data":"35c46ddaa92d075bf5487171c87492a57d4d02fd1115d60ef34a593e7268a940"} Jan 22 07:11:17 crc kubenswrapper[4720]: I0122 07:11:17.948808 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:18 crc kubenswrapper[4720]: I0122 07:11:18.698506 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/cinder-api-0" podUID="cd08ccf6-7d46-4a4a-a77b-571fa77bba36" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.208:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 22 07:11:18 crc kubenswrapper[4720]: I0122 07:11:18.974145 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"] Jan 22 07:11:18 crc kubenswrapper[4720]: I0122 07:11:18.978304 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.005445 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"] Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.035733 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerStarted","Data":"0cec08b78d2b43050454b187a864abd15ef26d87f024551f6c067c29cd851906"} Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.130711 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrwcz\" (UniqueName: \"kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.130891 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.131155 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.195338 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.232713 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.232853 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.232942 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrwcz\" (UniqueName: \"kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.233367 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.233377 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content\") pod \"redhat-marketplace-5rffs\" (UID: 
\"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.253662 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrwcz\" (UniqueName: \"kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz\") pod \"redhat-marketplace-5rffs\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") " pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.301032 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rffs" Jan 22 07:11:19 crc kubenswrapper[4720]: I0122 07:11:19.765109 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"] Jan 22 07:11:19 crc kubenswrapper[4720]: W0122 07:11:19.765924 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6033cd71_d459_4f45_b3a4_3f38a48309a6.slice/crio-6ff729964db3ab42f88838e8bca1e492adf7ac182e1ab4c79a88c55846716096 WatchSource:0}: Error finding container 6ff729964db3ab42f88838e8bca1e492adf7ac182e1ab4c79a88c55846716096: Status 404 returned error can't find the container with id 6ff729964db3ab42f88838e8bca1e492adf7ac182e1ab4c79a88c55846716096 Jan 22 07:11:20 crc kubenswrapper[4720]: I0122 07:11:20.044644 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerStarted","Data":"25c43d8a14f5ff1a61cad89d904df608af3e4b02abcdc4254f4ad81b4a6c5274"} Jan 22 07:11:20 crc kubenswrapper[4720]: I0122 07:11:20.049427 4720 generic.go:334] "Generic (PLEG): container finished" podID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerID="a4a36ba7e7f611b48ae7b67ef07f2620449220084f8780a1b34e4af32f987f0f" exitCode=0 Jan 22 07:11:20 crc 
kubenswrapper[4720]: I0122 07:11:20.049472 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerDied","Data":"a4a36ba7e7f611b48ae7b67ef07f2620449220084f8780a1b34e4af32f987f0f"} Jan 22 07:11:20 crc kubenswrapper[4720]: I0122 07:11:20.049504 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerStarted","Data":"6ff729964db3ab42f88838e8bca1e492adf7ac182e1ab4c79a88c55846716096"} Jan 22 07:11:20 crc kubenswrapper[4720]: I0122 07:11:20.384564 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.061589 4720 generic.go:334] "Generic (PLEG): container finished" podID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerID="bdec42351414b2e79c3a17650f09e3f2538e8a3df1c7c4af4625b6018230e05a" exitCode=0 Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.061945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerDied","Data":"bdec42351414b2e79c3a17650f09e3f2538e8a3df1c7c4af4625b6018230e05a"} Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.069976 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerStarted","Data":"a575556003e7d42da5f702637bf19a509657be4642dab87a14a0ab20935437d8"} Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.070159 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.114646 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.555160431 podStartE2EDuration="6.114623803s" podCreationTimestamp="2026-01-22 07:11:15 +0000 UTC" firstStartedPulling="2026-01-22 07:11:16.033111168 +0000 UTC m=+2168.175017873" lastFinishedPulling="2026-01-22 07:11:20.59257454 +0000 UTC m=+2172.734481245" observedRunningTime="2026-01-22 07:11:21.111277468 +0000 UTC m=+2173.253184183" watchObservedRunningTime="2026-01-22 07:11:21.114623803 +0000 UTC m=+2173.256530508" Jan 22 07:11:21 crc kubenswrapper[4720]: I0122 07:11:21.570004 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.078977 4720 generic.go:334] "Generic (PLEG): container finished" podID="c37e37bb-9267-4a15-90a8-cf5cb101730d" containerID="37ffc5b53586441d22624c34ddde403cbd2dc8c740d4a9892c32e1b4a7a9b8e4" exitCode=0 Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.079366 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c37e37bb-9267-4a15-90a8-cf5cb101730d","Type":"ContainerDied","Data":"37ffc5b53586441d22624c34ddde403cbd2dc8c740d4a9892c32e1b4a7a9b8e4"} Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.079404 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"c37e37bb-9267-4a15-90a8-cf5cb101730d","Type":"ContainerDied","Data":"3c346aa67441273617cc4f15cbce4e97cc797efab3c70ec6333713f416535cc8"} Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.079416 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c346aa67441273617cc4f15cbce4e97cc797efab3c70ec6333713f416535cc8" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.084221 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerStarted","Data":"4158a9d27efd3ac3d6959741fde39ba44de53e600ec9f04334b0e21cbb6ae641"} Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.099654 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.111803 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rffs" podStartSLOduration=2.701211097 podStartE2EDuration="4.111780332s" podCreationTimestamp="2026-01-22 07:11:18 +0000 UTC" firstStartedPulling="2026-01-22 07:11:20.056243683 +0000 UTC m=+2172.198150388" lastFinishedPulling="2026-01-22 07:11:21.466812918 +0000 UTC m=+2173.608719623" observedRunningTime="2026-01-22 07:11:22.103774546 +0000 UTC m=+2174.245681251" watchObservedRunningTime="2026-01-22 07:11:22.111780332 +0000 UTC m=+2174.253687037" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.192740 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.192860 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.192893 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.192959 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.193093 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.193147 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmt7s\" (UniqueName: \"kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s\") pod \"c37e37bb-9267-4a15-90a8-cf5cb101730d\" (UID: \"c37e37bb-9267-4a15-90a8-cf5cb101730d\") " Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.193437 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs" (OuterVolumeSpecName: "logs") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.193561 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c37e37bb-9267-4a15-90a8-cf5cb101730d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.207153 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s" (OuterVolumeSpecName: "kube-api-access-pmt7s") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "kube-api-access-pmt7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.227198 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.239286 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.276026 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data" (OuterVolumeSpecName: "config-data") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.297647 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.297672 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmt7s\" (UniqueName: \"kubernetes.io/projected/c37e37bb-9267-4a15-90a8-cf5cb101730d-kube-api-access-pmt7s\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.297682 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.297691 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.305054 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c37e37bb-9267-4a15-90a8-cf5cb101730d" (UID: "c37e37bb-9267-4a15-90a8-cf5cb101730d"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.399113 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c37e37bb-9267-4a15-90a8-cf5cb101730d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:22 crc kubenswrapper[4720]: I0122 07:11:22.783192 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_c37e37bb-9267-4a15-90a8-cf5cb101730d/watcher-decision-engine/0.log" Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.091155 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.122652 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.132534 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.145415 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:23 crc kubenswrapper[4720]: E0122 07:11:23.145836 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37e37bb-9267-4a15-90a8-cf5cb101730d" containerName="watcher-decision-engine" Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.145855 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37e37bb-9267-4a15-90a8-cf5cb101730d" containerName="watcher-decision-engine" Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.146023 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37e37bb-9267-4a15-90a8-cf5cb101730d" containerName="watcher-decision-engine" Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 
07:11:23.146628 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.148485 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.160999 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315277 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315334 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpnbx\" (UniqueName: \"kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315380 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315548 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.315571 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.417411 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.417757 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpnbx\" (UniqueName: \"kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.417860 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.417991 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.418069 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.418138 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.418591 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.421535 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.421630 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.422100 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.430113 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.455662 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpnbx\" (UniqueName: \"kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.471781 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:23 crc kubenswrapper[4720]: I0122 07:11:23.901800 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:11:23 crc kubenswrapper[4720]: W0122 07:11:23.904270 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a4eaaa_38f1_4956_9a9a_0d42286ee20e.slice/crio-cf35391377ce928094cba3486c1bba50c00fcb9c84b9b6fc998504d89bdf3190 WatchSource:0}: Error finding container cf35391377ce928094cba3486c1bba50c00fcb9c84b9b6fc998504d89bdf3190: Status 404 returned error can't find the container with id cf35391377ce928094cba3486c1bba50c00fcb9c84b9b6fc998504d89bdf3190
Jan 22 07:11:24 crc kubenswrapper[4720]: I0122 07:11:24.100074 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"08a4eaaa-38f1-4956-9a9a-0d42286ee20e","Type":"ContainerStarted","Data":"cf35391377ce928094cba3486c1bba50c00fcb9c84b9b6fc998504d89bdf3190"}
Jan 22 07:11:24 crc kubenswrapper[4720]: I0122 07:11:24.221465 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37e37bb-9267-4a15-90a8-cf5cb101730d" path="/var/lib/kubelet/pods/c37e37bb-9267-4a15-90a8-cf5cb101730d/volumes"
Jan 22 07:11:25 crc kubenswrapper[4720]: I0122 07:11:25.110578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"08a4eaaa-38f1-4956-9a9a-0d42286ee20e","Type":"ContainerStarted","Data":"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1"}
Jan 22 07:11:25 crc kubenswrapper[4720]: I0122 07:11:25.135243 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.135215716 podStartE2EDuration="2.135215716s" podCreationTimestamp="2026-01-22 07:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:25.128270149 +0000 UTC m=+2177.270176854" watchObservedRunningTime="2026-01-22 07:11:25.135215716 +0000 UTC m=+2177.277122431"
Jan 22 07:11:25 crc kubenswrapper[4720]: I0122 07:11:25.153000 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:26 crc kubenswrapper[4720]: I0122 07:11:26.363098 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:27 crc kubenswrapper[4720]: I0122 07:11:27.604063 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:28 crc kubenswrapper[4720]: I0122 07:11:28.819034 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:29 crc kubenswrapper[4720]: I0122 07:11:29.301884 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:29 crc kubenswrapper[4720]: I0122 07:11:29.301958 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:29 crc kubenswrapper[4720]: I0122 07:11:29.361541 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:29 crc kubenswrapper[4720]: I0122 07:11:29.780972 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:11:29 crc kubenswrapper[4720]: I0122 07:11:29.781064 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:11:30 crc kubenswrapper[4720]: I0122 07:11:30.037186 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:30 crc kubenswrapper[4720]: I0122 07:11:30.220426 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:31 crc kubenswrapper[4720]: I0122 07:11:31.234466 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:32 crc kubenswrapper[4720]: I0122 07:11:32.428155 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:32 crc kubenswrapper[4720]: I0122 07:11:32.967837 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"]
Jan 22 07:11:32 crc kubenswrapper[4720]: I0122 07:11:32.968142 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rffs" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="registry-server" containerID="cri-o://4158a9d27efd3ac3d6959741fde39ba44de53e600ec9f04334b0e21cbb6ae641" gracePeriod=2
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.264577 4720 generic.go:334] "Generic (PLEG): container finished" podID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerID="4158a9d27efd3ac3d6959741fde39ba44de53e600ec9f04334b0e21cbb6ae641" exitCode=0
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.264664 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerDied","Data":"4158a9d27efd3ac3d6959741fde39ba44de53e600ec9f04334b0e21cbb6ae641"}
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.472757 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.506973 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.521655 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.643362 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.683979 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrwcz\" (UniqueName: \"kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz\") pod \"6033cd71-d459-4f45-b3a4-3f38a48309a6\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") "
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.684073 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities\") pod \"6033cd71-d459-4f45-b3a4-3f38a48309a6\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") "
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.684184 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content\") pod \"6033cd71-d459-4f45-b3a4-3f38a48309a6\" (UID: \"6033cd71-d459-4f45-b3a4-3f38a48309a6\") "
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.686051 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities" (OuterVolumeSpecName: "utilities") pod "6033cd71-d459-4f45-b3a4-3f38a48309a6" (UID: "6033cd71-d459-4f45-b3a4-3f38a48309a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.690714 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz" (OuterVolumeSpecName: "kube-api-access-zrwcz") pod "6033cd71-d459-4f45-b3a4-3f38a48309a6" (UID: "6033cd71-d459-4f45-b3a4-3f38a48309a6"). InnerVolumeSpecName "kube-api-access-zrwcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.712367 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6033cd71-d459-4f45-b3a4-3f38a48309a6" (UID: "6033cd71-d459-4f45-b3a4-3f38a48309a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.786721 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.786778 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrwcz\" (UniqueName: \"kubernetes.io/projected/6033cd71-d459-4f45-b3a4-3f38a48309a6-kube-api-access-zrwcz\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:33 crc kubenswrapper[4720]: I0122 07:11:33.786793 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6033cd71-d459-4f45-b3a4-3f38a48309a6-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.278555 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rffs" event={"ID":"6033cd71-d459-4f45-b3a4-3f38a48309a6","Type":"ContainerDied","Data":"6ff729964db3ab42f88838e8bca1e492adf7ac182e1ab4c79a88c55846716096"}
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.278643 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rffs"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.279024 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.279056 4720 scope.go:117] "RemoveContainer" containerID="4158a9d27efd3ac3d6959741fde39ba44de53e600ec9f04334b0e21cbb6ae641"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.342077 4720 scope.go:117] "RemoveContainer" containerID="bdec42351414b2e79c3a17650f09e3f2538e8a3df1c7c4af4625b6018230e05a"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.337273 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"]
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.349851 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rffs"]
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.417332 4720 scope.go:117] "RemoveContainer" containerID="a4a36ba7e7f611b48ae7b67ef07f2620449220084f8780a1b34e4af32f987f0f"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.495789 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.873367 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_08a4eaaa-38f1-4956-9a9a-0d42286ee20e/watcher-decision-engine/0.log"
Jan 22 07:11:34 crc kubenswrapper[4720]: I0122 07:11:34.995442 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.003952 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-bfbqw"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.054834 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher5d39-account-delete-tgqmp"]
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.055827 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="extract-utilities"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.055871 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="extract-utilities"
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.055881 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="extract-content"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.055889 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="extract-content"
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.055940 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="registry-server"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.055947 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="registry-server"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.056218 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" containerName="registry-server"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.056976 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.085263 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5d39-account-delete-tgqmp"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.103933 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.144786 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.145141 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-kuttl-api-log" containerID="cri-o://de6570d94cc0eb56bc1241f55456918f2f6ecc692da21e09b65ee0d291644536" gracePeriod=30
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.145324 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-api" containerID="cri-o://71fee65bd2a62e26d0e6179628570fd693e3bce7b1561fbb3408c11c7fdd5cda" gracePeriod=30
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.197843 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.198558 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" containerName="watcher-applier" containerID="cri-o://a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e" gracePeriod=30
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.219347 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.219418 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952lh\" (UniqueName: \"kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.292401 4720 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-57hws\" not found"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.321503 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-952lh\" (UniqueName: \"kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.322557 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.323186 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.355760 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-952lh\" (UniqueName: \"kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh\") pod \"watcher5d39-account-delete-tgqmp\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: I0122 07:11:35.380452 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp"
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.425016 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.425115 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data podName:08a4eaaa-38f1-4956-9a9a-0d42286ee20e nodeName:}" failed. No retries permitted until 2026-01-22 07:11:35.925087728 +0000 UTC m=+2188.066994433 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.937940 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:35 crc kubenswrapper[4720]: E0122 07:11:35.938580 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data podName:08a4eaaa-38f1-4956-9a9a-0d42286ee20e nodeName:}" failed. No retries permitted until 2026-01-22 07:11:36.938354082 +0000 UTC m=+2189.080260787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.006498 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher5d39-account-delete-tgqmp"]
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.229335 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6033cd71-d459-4f45-b3a4-3f38a48309a6" path="/var/lib/kubelet/pods/6033cd71-d459-4f45-b3a4-3f38a48309a6/volumes"
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.230395 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93a94ce-74e1-414f-930d-e74f67d17f2c" path="/var/lib/kubelet/pods/d93a94ce-74e1-414f-930d-e74f67d17f2c/volumes"
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.328963 4720 generic.go:334] "Generic (PLEG): container finished" podID="bde2542f-6d84-4fee-8690-23325fb92c83" containerID="71fee65bd2a62e26d0e6179628570fd693e3bce7b1561fbb3408c11c7fdd5cda" exitCode=0
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.329271 4720 generic.go:334] "Generic (PLEG): container finished" podID="bde2542f-6d84-4fee-8690-23325fb92c83" containerID="de6570d94cc0eb56bc1241f55456918f2f6ecc692da21e09b65ee0d291644536" exitCode=143
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.329101 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerDied","Data":"71fee65bd2a62e26d0e6179628570fd693e3bce7b1561fbb3408c11c7fdd5cda"}
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.329352 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerDied","Data":"de6570d94cc0eb56bc1241f55456918f2f6ecc692da21e09b65ee0d291644536"}
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.330891 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" containerName="watcher-decision-engine" containerID="cri-o://17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1" gracePeriod=30
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.331282 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp" event={"ID":"b88c663e-c2ac-48aa-8e78-19b596dd9b92","Type":"ContainerStarted","Data":"c4b31c0fce4808321ee2f9a663e233dfaadc56bbcfc5804839b2de2ebbb3a2e4"}
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.554440 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649483 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649549 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649731 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649787 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649807 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.649836 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8c6g7\" (UniqueName: \"kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7\") pod \"bde2542f-6d84-4fee-8690-23325fb92c83\" (UID: \"bde2542f-6d84-4fee-8690-23325fb92c83\") "
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.650413 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs" (OuterVolumeSpecName: "logs") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.659406 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7" (OuterVolumeSpecName: "kube-api-access-8c6g7") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "kube-api-access-8c6g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.678479 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.690031 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.747232 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data" (OuterVolumeSpecName: "config-data") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.754327 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.754363 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8c6g7\" (UniqueName: \"kubernetes.io/projected/bde2542f-6d84-4fee-8690-23325fb92c83-kube-api-access-8c6g7\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.754375 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.754383 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.754394 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bde2542f-6d84-4fee-8690-23325fb92c83-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.762863 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "bde2542f-6d84-4fee-8690-23325fb92c83" (UID: "bde2542f-6d84-4fee-8690-23325fb92c83"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:36 crc kubenswrapper[4720]: I0122 07:11:36.856685 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/bde2542f-6d84-4fee-8690-23325fb92c83-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:36 crc kubenswrapper[4720]: E0122 07:11:36.958982 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:36 crc kubenswrapper[4720]: E0122 07:11:36.959084 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data podName:08a4eaaa-38f1-4956-9a9a-0d42286ee20e nodeName:}" failed. No retries permitted until 2026-01-22 07:11:38.959059668 +0000 UTC m=+2191.100966373 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e") : secret "watcher-kuttl-decision-engine-config-data" not found
Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.341498 4720 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.341517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"bde2542f-6d84-4fee-8690-23325fb92c83","Type":"ContainerDied","Data":"71e5ece357d2a36464161da08fb7b28cbcfbea9f4161795951a993ef4a5c01a6"} Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.341594 4720 scope.go:117] "RemoveContainer" containerID="71fee65bd2a62e26d0e6179628570fd693e3bce7b1561fbb3408c11c7fdd5cda" Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.343352 4720 generic.go:334] "Generic (PLEG): container finished" podID="b88c663e-c2ac-48aa-8e78-19b596dd9b92" containerID="77bb54fccc09173dae5f0b5112b1b4564a07b8ab034fbebd99dbecbce071a4ec" exitCode=0 Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.343417 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp" event={"ID":"b88c663e-c2ac-48aa-8e78-19b596dd9b92","Type":"ContainerDied","Data":"77bb54fccc09173dae5f0b5112b1b4564a07b8ab034fbebd99dbecbce071a4ec"} Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.384460 4720 scope.go:117] "RemoveContainer" containerID="de6570d94cc0eb56bc1241f55456918f2f6ecc692da21e09b65ee0d291644536" Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.456985 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:11:37 crc kubenswrapper[4720]: I0122 07:11:37.477111 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.004702 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.006096 4720 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/ceilometer-0" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-central-agent" containerID="cri-o://279d85fac63b31574b0a4f107177c8bb36fc6e2884c4d61f91ac06f36c619862" gracePeriod=30 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.006198 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="proxy-httpd" containerID="cri-o://a575556003e7d42da5f702637bf19a509657be4642dab87a14a0ab20935437d8" gracePeriod=30 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.006291 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-notification-agent" containerID="cri-o://0cec08b78d2b43050454b187a864abd15ef26d87f024551f6c067c29cd851906" gracePeriod=30 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.006245 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="sg-core" containerID="cri-o://25c43d8a14f5ff1a61cad89d904df608af3e4b02abcdc4254f4ad81b4a6c5274" gracePeriod=30 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.016888 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.214:3000/\": EOF" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.226318 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" path="/var/lib/kubelet/pods/bde2542f-6d84-4fee-8690-23325fb92c83/volumes" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.356348 4720 generic.go:334] "Generic (PLEG): container finished" 
podID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerID="a575556003e7d42da5f702637bf19a509657be4642dab87a14a0ab20935437d8" exitCode=0 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.356392 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerID="25c43d8a14f5ff1a61cad89d904df608af3e4b02abcdc4254f4ad81b4a6c5274" exitCode=2 Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.356613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerDied","Data":"a575556003e7d42da5f702637bf19a509657be4642dab87a14a0ab20935437d8"} Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.356660 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerDied","Data":"25c43d8a14f5ff1a61cad89d904df608af3e4b02abcdc4254f4ad81b4a6c5274"} Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.766537 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.903880 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts\") pod \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.904028 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-952lh\" (UniqueName: \"kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh\") pod \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\" (UID: \"b88c663e-c2ac-48aa-8e78-19b596dd9b92\") " Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.905820 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b88c663e-c2ac-48aa-8e78-19b596dd9b92" (UID: "b88c663e-c2ac-48aa-8e78-19b596dd9b92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.925643 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh" (OuterVolumeSpecName: "kube-api-access-952lh") pod "b88c663e-c2ac-48aa-8e78-19b596dd9b92" (UID: "b88c663e-c2ac-48aa-8e78-19b596dd9b92"). InnerVolumeSpecName "kube-api-access-952lh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.969271 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6427"] Jan 22 07:11:38 crc kubenswrapper[4720]: E0122 07:11:38.969867 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b88c663e-c2ac-48aa-8e78-19b596dd9b92" containerName="mariadb-account-delete" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.969889 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="b88c663e-c2ac-48aa-8e78-19b596dd9b92" containerName="mariadb-account-delete" Jan 22 07:11:38 crc kubenswrapper[4720]: E0122 07:11:38.969931 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-kuttl-api-log" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.969941 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-kuttl-api-log" Jan 22 07:11:38 crc kubenswrapper[4720]: E0122 07:11:38.969964 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-api" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.969973 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-api" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.970196 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-api" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.970214 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="b88c663e-c2ac-48aa-8e78-19b596dd9b92" containerName="mariadb-account-delete" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.970232 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bde2542f-6d84-4fee-8690-23325fb92c83" containerName="watcher-kuttl-api-log" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.976037 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:38 crc kubenswrapper[4720]: I0122 07:11:38.984095 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6427"] Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.006769 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b88c663e-c2ac-48aa-8e78-19b596dd9b92-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.006805 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-952lh\" (UniqueName: \"kubernetes.io/projected/b88c663e-c2ac-48aa-8e78-19b596dd9b92-kube-api-access-952lh\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: E0122 07:11:39.006936 4720 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:11:39 crc kubenswrapper[4720]: E0122 07:11:39.006993 4720 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data podName:08a4eaaa-38f1-4956-9a9a-0d42286ee20e nodeName:}" failed. No retries permitted until 2026-01-22 07:11:43.006977003 +0000 UTC m=+2195.148883708 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.044683 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108005 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle\") pod \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108144 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr654\" (UniqueName: \"kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654\") pod \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108187 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls\") pod \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108278 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data\") pod \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108368 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs\") pod \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\" (UID: \"be2f9b40-2fd1-4ae5-8772-d8770884bd9d\") " Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108691 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvq86\" (UniqueName: \"kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.108902 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.110082 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs" (OuterVolumeSpecName: "logs") pod "be2f9b40-2fd1-4ae5-8772-d8770884bd9d" (UID: "be2f9b40-2fd1-4ae5-8772-d8770884bd9d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.113247 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654" (OuterVolumeSpecName: "kube-api-access-pr654") pod "be2f9b40-2fd1-4ae5-8772-d8770884bd9d" (UID: "be2f9b40-2fd1-4ae5-8772-d8770884bd9d"). InnerVolumeSpecName "kube-api-access-pr654". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.140389 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be2f9b40-2fd1-4ae5-8772-d8770884bd9d" (UID: "be2f9b40-2fd1-4ae5-8772-d8770884bd9d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.157294 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data" (OuterVolumeSpecName: "config-data") pod "be2f9b40-2fd1-4ae5-8772-d8770884bd9d" (UID: "be2f9b40-2fd1-4ae5-8772-d8770884bd9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.187654 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "be2f9b40-2fd1-4ae5-8772-d8770884bd9d" (UID: "be2f9b40-2fd1-4ae5-8772-d8770884bd9d"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210684 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210797 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210848 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvq86\" (UniqueName: \"kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210930 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210944 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr654\" (UniqueName: \"kubernetes.io/projected/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-kube-api-access-pr654\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210957 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210966 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.210974 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be2f9b40-2fd1-4ae5-8772-d8770884bd9d-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.211153 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.211214 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.229250 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvq86\" (UniqueName: \"kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86\") pod \"community-operators-s6427\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") " pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.310743 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.372601 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerID="279d85fac63b31574b0a4f107177c8bb36fc6e2884c4d61f91ac06f36c619862" exitCode=0 Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.372841 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerDied","Data":"279d85fac63b31574b0a4f107177c8bb36fc6e2884c4d61f91ac06f36c619862"} Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.390043 4720 generic.go:334] "Generic (PLEG): container finished" podID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" containerID="a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e" exitCode=0 Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.390124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"be2f9b40-2fd1-4ae5-8772-d8770884bd9d","Type":"ContainerDied","Data":"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e"} Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.390138 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.390173 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"be2f9b40-2fd1-4ae5-8772-d8770884bd9d","Type":"ContainerDied","Data":"00318edb60daddb11d74c999395b56ffb244a3f537c4a21c7e759ddf41dd5f16"} Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.390229 4720 scope.go:117] "RemoveContainer" containerID="a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.406341 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp" event={"ID":"b88c663e-c2ac-48aa-8e78-19b596dd9b92","Type":"ContainerDied","Data":"c4b31c0fce4808321ee2f9a663e233dfaadc56bbcfc5804839b2de2ebbb3a2e4"} Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.406380 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher5d39-account-delete-tgqmp" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.406399 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b31c0fce4808321ee2f9a663e233dfaadc56bbcfc5804839b2de2ebbb3a2e4" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.448845 4720 scope.go:117] "RemoveContainer" containerID="a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.452025 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:11:39 crc kubenswrapper[4720]: E0122 07:11:39.452780 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e\": container with ID starting with a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e not found: ID does not exist" containerID="a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.452808 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e"} err="failed to get container status \"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e\": rpc error: code = NotFound desc = could not find container \"a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e\": container with ID starting with a9096406605c76a8731deae712fd1bfe87eedb40f9b51d28e66bc9ab53ddf51e not found: ID does not exist" Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.462562 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:11:39 crc kubenswrapper[4720]: I0122 07:11:39.870964 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-s6427"] Jan 22 07:11:39 crc kubenswrapper[4720]: W0122 07:11:39.880199 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd95b789f_6df6_421f_bfe5_1d06f018b526.slice/crio-18b6eff2a150736cd4c0c198bf4e04592693fb4c84930f34e73e43f0660e7d30 WatchSource:0}: Error finding container 18b6eff2a150736cd4c0c198bf4e04592693fb4c84930f34e73e43f0660e7d30: Status 404 returned error can't find the container with id 18b6eff2a150736cd4c0c198bf4e04592693fb4c84930f34e73e43f0660e7d30 Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.116946 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7bkbr"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.123015 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-7bkbr"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.134865 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher5d39-account-delete-tgqmp"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.145965 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher5d39-account-delete-tgqmp"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.151226 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.159009 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-5d39-account-create-update-7cf7p"] Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.224755 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5324884d-c405-4664-b229-59325b6fff1b" path="/var/lib/kubelet/pods/5324884d-c405-4664-b229-59325b6fff1b/volumes" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.225500 4720 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b88c663e-c2ac-48aa-8e78-19b596dd9b92" path="/var/lib/kubelet/pods/b88c663e-c2ac-48aa-8e78-19b596dd9b92/volumes" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.226104 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" path="/var/lib/kubelet/pods/be2f9b40-2fd1-4ae5-8772-d8770884bd9d/volumes" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.227188 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e26c83ad-445d-4fb7-92f9-a830d1fd4e41" path="/var/lib/kubelet/pods/e26c83ad-445d-4fb7-92f9-a830d1fd4e41/volumes" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.419931 4720 generic.go:334] "Generic (PLEG): container finished" podID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerID="0cec08b78d2b43050454b187a864abd15ef26d87f024551f6c067c29cd851906" exitCode=0 Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.420057 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerDied","Data":"0cec08b78d2b43050454b187a864abd15ef26d87f024551f6c067c29cd851906"} Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.423937 4720 generic.go:334] "Generic (PLEG): container finished" podID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerID="9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437" exitCode=0 Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.423982 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerDied","Data":"9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437"} Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.424007 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" 
event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerStarted","Data":"18b6eff2a150736cd4c0c198bf4e04592693fb4c84930f34e73e43f0660e7d30"} Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.733893 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.853846 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.853896 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.853996 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854058 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs\") pod 
\"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854248 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdm8g\" (UniqueName: \"kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854269 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854404 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd\") pod \"cd50208a-0f02-4a61-9393-5d24423ffd69\" (UID: \"cd50208a-0f02-4a61-9393-5d24423ffd69\") " Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.854768 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.855014 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.856312 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.861074 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts" (OuterVolumeSpecName: "scripts") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.861102 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g" (OuterVolumeSpecName: "kube-api-access-kdm8g") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "kube-api-access-kdm8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.879626 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.900521 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.916857 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.934571 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data" (OuterVolumeSpecName: "config-data") pod "cd50208a-0f02-4a61-9393-5d24423ffd69" (UID: "cd50208a-0f02-4a61-9393-5d24423ffd69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959297 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959340 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959360 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdm8g\" (UniqueName: \"kubernetes.io/projected/cd50208a-0f02-4a61-9393-5d24423ffd69-kube-api-access-kdm8g\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959373 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959388 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd50208a-0f02-4a61-9393-5d24423ffd69-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959399 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:40 crc kubenswrapper[4720]: I0122 07:11:40.959409 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd50208a-0f02-4a61-9393-5d24423ffd69-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.433291 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerStarted","Data":"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"} Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.436225 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"cd50208a-0f02-4a61-9393-5d24423ffd69","Type":"ContainerDied","Data":"35c46ddaa92d075bf5487171c87492a57d4d02fd1115d60ef34a593e7268a940"} Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.436292 4720 scope.go:117] "RemoveContainer" containerID="a575556003e7d42da5f702637bf19a509657be4642dab87a14a0ab20935437d8" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.436346 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.489083 4720 scope.go:117] "RemoveContainer" containerID="25c43d8a14f5ff1a61cad89d904df608af3e4b02abcdc4254f4ad81b4a6c5274" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.489949 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.506187 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537027 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:41 crc kubenswrapper[4720]: E0122 07:11:41.537397 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" containerName="watcher-applier" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537416 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" containerName="watcher-applier" Jan 22 07:11:41 
crc kubenswrapper[4720]: E0122 07:11:41.537427 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-notification-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537435 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-notification-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: E0122 07:11:41.537443 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="proxy-httpd" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537450 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="proxy-httpd" Jan 22 07:11:41 crc kubenswrapper[4720]: E0122 07:11:41.537465 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="sg-core" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537471 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="sg-core" Jan 22 07:11:41 crc kubenswrapper[4720]: E0122 07:11:41.537502 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-central-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537507 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-central-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537644 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2f9b40-2fd1-4ae5-8772-d8770884bd9d" containerName="watcher-applier" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537661 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="sg-core" Jan 22 
07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537670 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-notification-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537678 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="proxy-httpd" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.537690 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" containerName="ceilometer-central-agent" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.539362 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.544607 4720 scope.go:117] "RemoveContainer" containerID="0cec08b78d2b43050454b187a864abd15ef26d87f024551f6c067c29cd851906" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.549542 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.549580 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.549809 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.634110 4720 scope.go:117] "RemoveContainer" containerID="279d85fac63b31574b0a4f107177c8bb36fc6e2884c4d61f91ac06f36c619862" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.636714 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678293 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678363 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678389 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678460 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-446bv\" (UniqueName: \"kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678488 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678560 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.678583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780083 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780144 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780166 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780261 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-446bv\" (UniqueName: \"kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780300 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780360 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780383 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780404 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.780827 
4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.781429 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.790126 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.791683 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.793856 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.796690 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.808070 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-446bv\" (UniqueName: \"kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.811187 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data\") pod \"ceilometer-0\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:41 crc kubenswrapper[4720]: I0122 07:11:41.908583 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.000686 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087449 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpnbx\" (UniqueName: \"kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087566 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087719 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.087770 4720 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs\") pod \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\" (UID: \"08a4eaaa-38f1-4956-9a9a-0d42286ee20e\") " Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.088432 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs" (OuterVolumeSpecName: "logs") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.104788 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx" (OuterVolumeSpecName: "kube-api-access-dpnbx") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "kube-api-access-dpnbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.119375 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.165054 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.181362 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data" (OuterVolumeSpecName: "config-data") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.189621 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpnbx\" (UniqueName: \"kubernetes.io/projected/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-kube-api-access-dpnbx\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.189647 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.189659 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.189668 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.189678 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.207496 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "08a4eaaa-38f1-4956-9a9a-0d42286ee20e" (UID: "08a4eaaa-38f1-4956-9a9a-0d42286ee20e"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.220957 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd50208a-0f02-4a61-9393-5d24423ffd69" path="/var/lib/kubelet/pods/cd50208a-0f02-4a61-9393-5d24423ffd69/volumes" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.291332 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/08a4eaaa-38f1-4956-9a9a-0d42286ee20e-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.430767 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.449082 4720 generic.go:334] "Generic (PLEG): container finished" podID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" containerID="17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1" exitCode=0 Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.449183 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"08a4eaaa-38f1-4956-9a9a-0d42286ee20e","Type":"ContainerDied","Data":"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1"} Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.449493 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"08a4eaaa-38f1-4956-9a9a-0d42286ee20e","Type":"ContainerDied","Data":"cf35391377ce928094cba3486c1bba50c00fcb9c84b9b6fc998504d89bdf3190"} Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.449541 4720 scope.go:117] 
"RemoveContainer" containerID="17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.449763 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.461417 4720 generic.go:334] "Generic (PLEG): container finished" podID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerID="561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331" exitCode=0 Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.461728 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerDied","Data":"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"} Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.502509 4720 scope.go:117] "RemoveContainer" containerID="17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1" Jan 22 07:11:42 crc kubenswrapper[4720]: E0122 07:11:42.503377 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1\": container with ID starting with 17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1 not found: ID does not exist" containerID="17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.503415 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1"} err="failed to get container status \"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1\": rpc error: code = NotFound desc = could not find container \"17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1\": container 
with ID starting with 17c2d00fe7004d679924e717586b6da3c7015366b9d03dddd525fdb7b785cad1 not found: ID does not exist" Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.521768 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:42 crc kubenswrapper[4720]: I0122 07:11:42.528722 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.476583 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerStarted","Data":"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"} Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.478314 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerStarted","Data":"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478"} Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.478364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerStarted","Data":"f9e4789db2e8292d9d7b30b1674e7860a6421525cbf5b606f53dfb2d932ace6c"} Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.498686 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6427" podStartSLOduration=3.054616468 podStartE2EDuration="5.498662759s" podCreationTimestamp="2026-01-22 07:11:38 +0000 UTC" firstStartedPulling="2026-01-22 07:11:40.426021696 +0000 UTC m=+2192.567928451" lastFinishedPulling="2026-01-22 07:11:42.870068037 +0000 UTC m=+2195.011974742" observedRunningTime="2026-01-22 07:11:43.495464119 +0000 UTC m=+2195.637370834" 
watchObservedRunningTime="2026-01-22 07:11:43.498662759 +0000 UTC m=+2195.640569464" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.566758 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-56tkg"] Jan 22 07:11:43 crc kubenswrapper[4720]: E0122 07:11:43.567171 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" containerName="watcher-decision-engine" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.567189 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" containerName="watcher-decision-engine" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.567393 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" containerName="watcher-decision-engine" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.568104 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.584886 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-56tkg"] Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.619603 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42cv9\" (UniqueName: \"kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9\") pod \"watcher-db-create-56tkg\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.619703 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts\") pod \"watcher-db-create-56tkg\" (UID: 
\"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.626797 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-4042-account-create-update-rjvws"] Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.630465 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.633821 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.647773 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4042-account-create-update-rjvws"] Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.721221 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts\") pod \"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.721319 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42cv9\" (UniqueName: \"kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9\") pod \"watcher-db-create-56tkg\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.721644 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h8cf\" (UniqueName: \"kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf\") pod 
\"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.721725 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts\") pod \"watcher-db-create-56tkg\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.722732 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts\") pod \"watcher-db-create-56tkg\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.740163 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42cv9\" (UniqueName: \"kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9\") pod \"watcher-db-create-56tkg\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.823149 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h8cf\" (UniqueName: \"kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf\") pod \"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.823276 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts\") pod \"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.824268 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts\") pod \"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.849423 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h8cf\" (UniqueName: \"kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf\") pod \"watcher-4042-account-create-update-rjvws\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.884526 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:43 crc kubenswrapper[4720]: I0122 07:11:43.967428 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:44 crc kubenswrapper[4720]: I0122 07:11:44.226772 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a4eaaa-38f1-4956-9a9a-0d42286ee20e" path="/var/lib/kubelet/pods/08a4eaaa-38f1-4956-9a9a-0d42286ee20e/volumes" Jan 22 07:11:44 crc kubenswrapper[4720]: I0122 07:11:44.433538 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-56tkg"] Jan 22 07:11:44 crc kubenswrapper[4720]: I0122 07:11:44.489490 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerStarted","Data":"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494"} Jan 22 07:11:44 crc kubenswrapper[4720]: I0122 07:11:44.491132 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-56tkg" event={"ID":"603e01a3-d099-4300-8a39-7987332eed09","Type":"ContainerStarted","Data":"f9b25e8804737bd6b8a0ae9b7363a2c49cd1c5b543cd8707fbf88557542a3420"} Jan 22 07:11:44 crc kubenswrapper[4720]: I0122 07:11:44.559260 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-4042-account-create-update-rjvws"] Jan 22 07:11:44 crc kubenswrapper[4720]: W0122 07:11:44.562939 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda48a3256_414e_4999_b919_fe801092cf23.slice/crio-3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86 WatchSource:0}: Error finding container 3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86: Status 404 returned error can't find the container with id 3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86 Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.501541 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerStarted","Data":"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5"} Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.503323 4720 generic.go:334] "Generic (PLEG): container finished" podID="603e01a3-d099-4300-8a39-7987332eed09" containerID="2e524cfb70b60a78682e2f54be696366ed85cc1fd60ba385dc5c86438ad662ab" exitCode=0 Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.503393 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-56tkg" event={"ID":"603e01a3-d099-4300-8a39-7987332eed09","Type":"ContainerDied","Data":"2e524cfb70b60a78682e2f54be696366ed85cc1fd60ba385dc5c86438ad662ab"} Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.504727 4720 generic.go:334] "Generic (PLEG): container finished" podID="a48a3256-414e-4999-b919-fe801092cf23" containerID="4db83137fc068de77dc06676c1b0f60f172e33619f5133b2922f88fb8045917d" exitCode=0 Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.504778 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" event={"ID":"a48a3256-414e-4999-b919-fe801092cf23","Type":"ContainerDied","Data":"4db83137fc068de77dc06676c1b0f60f172e33619f5133b2922f88fb8045917d"} Jan 22 07:11:45 crc kubenswrapper[4720]: I0122 07:11:45.504811 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" event={"ID":"a48a3256-414e-4999-b919-fe801092cf23","Type":"ContainerStarted","Data":"3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86"} Jan 22 07:11:46 crc kubenswrapper[4720]: I0122 07:11:46.519382 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerStarted","Data":"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db"} Jan 22 07:11:46 crc kubenswrapper[4720]: I0122 07:11:46.574180 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.892945803 podStartE2EDuration="5.574154372s" podCreationTimestamp="2026-01-22 07:11:41 +0000 UTC" firstStartedPulling="2026-01-22 07:11:42.474352603 +0000 UTC m=+2194.616259308" lastFinishedPulling="2026-01-22 07:11:46.155561172 +0000 UTC m=+2198.297467877" observedRunningTime="2026-01-22 07:11:46.565147388 +0000 UTC m=+2198.707054093" watchObservedRunningTime="2026-01-22 07:11:46.574154372 +0000 UTC m=+2198.716061077" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.019533 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.023275 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.102745 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts\") pod \"603e01a3-d099-4300-8a39-7987332eed09\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.103476 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603e01a3-d099-4300-8a39-7987332eed09" (UID: "603e01a3-d099-4300-8a39-7987332eed09"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.103794 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h8cf\" (UniqueName: \"kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf\") pod \"a48a3256-414e-4999-b919-fe801092cf23\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.104592 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts\") pod \"a48a3256-414e-4999-b919-fe801092cf23\" (UID: \"a48a3256-414e-4999-b919-fe801092cf23\") " Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.104642 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42cv9\" (UniqueName: \"kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9\") pod \"603e01a3-d099-4300-8a39-7987332eed09\" (UID: \"603e01a3-d099-4300-8a39-7987332eed09\") " Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.105047 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603e01a3-d099-4300-8a39-7987332eed09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.105699 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a48a3256-414e-4999-b919-fe801092cf23" (UID: "a48a3256-414e-4999-b919-fe801092cf23"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.112791 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf" (OuterVolumeSpecName: "kube-api-access-4h8cf") pod "a48a3256-414e-4999-b919-fe801092cf23" (UID: "a48a3256-414e-4999-b919-fe801092cf23"). InnerVolumeSpecName "kube-api-access-4h8cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.112893 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9" (OuterVolumeSpecName: "kube-api-access-42cv9") pod "603e01a3-d099-4300-8a39-7987332eed09" (UID: "603e01a3-d099-4300-8a39-7987332eed09"). InnerVolumeSpecName "kube-api-access-42cv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.207511 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h8cf\" (UniqueName: \"kubernetes.io/projected/a48a3256-414e-4999-b919-fe801092cf23-kube-api-access-4h8cf\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.207566 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a48a3256-414e-4999-b919-fe801092cf23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.207584 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42cv9\" (UniqueName: \"kubernetes.io/projected/603e01a3-d099-4300-8a39-7987332eed09-kube-api-access-42cv9\") on node \"crc\" DevicePath \"\"" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.536188 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-56tkg" 
event={"ID":"603e01a3-d099-4300-8a39-7987332eed09","Type":"ContainerDied","Data":"f9b25e8804737bd6b8a0ae9b7363a2c49cd1c5b543cd8707fbf88557542a3420"} Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.536241 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b25e8804737bd6b8a0ae9b7363a2c49cd1c5b543cd8707fbf88557542a3420" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.536323 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-56tkg" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.541006 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.541125 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-4042-account-create-update-rjvws" event={"ID":"a48a3256-414e-4999-b919-fe801092cf23","Type":"ContainerDied","Data":"3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86"} Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.541167 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c5d4db7de0b53457284a7240e4ce09d1916f2478900b0c80b6c4500a52f6e86" Jan 22 07:11:47 crc kubenswrapper[4720]: I0122 07:11:47.541187 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.819327 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"] Jan 22 07:11:48 crc kubenswrapper[4720]: E0122 07:11:48.820054 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603e01a3-d099-4300-8a39-7987332eed09" containerName="mariadb-database-create" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.820070 4720 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="603e01a3-d099-4300-8a39-7987332eed09" containerName="mariadb-database-create" Jan 22 07:11:48 crc kubenswrapper[4720]: E0122 07:11:48.820085 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a48a3256-414e-4999-b919-fe801092cf23" containerName="mariadb-account-create-update" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.820091 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="a48a3256-414e-4999-b919-fe801092cf23" containerName="mariadb-account-create-update" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.820252 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="603e01a3-d099-4300-8a39-7987332eed09" containerName="mariadb-database-create" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.820276 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="a48a3256-414e-4999-b919-fe801092cf23" containerName="mariadb-account-create-update" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.821050 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.824792 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zwg9h" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.833220 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.840518 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"] Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.938057 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8ghz\" (UniqueName: \"kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.938376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.938496 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:48 crc kubenswrapper[4720]: I0122 07:11:48.938587 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.026950 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-6qk6s"] Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.034184 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-6qk6s"] Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.040019 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8ghz\" (UniqueName: \"kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.040117 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.040143 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.040164 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.044532 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.046299 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.058618 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.090414 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8ghz\" (UniqueName: \"kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz\") pod \"watcher-kuttl-db-sync-59hk6\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.137866 4720 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.311118 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.311174 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.363890 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.597488 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"] Jan 22 07:11:49 crc kubenswrapper[4720]: I0122 07:11:49.627363 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6427" Jan 22 07:11:50 crc kubenswrapper[4720]: I0122 07:11:50.219973 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="439270d4-5c94-4dba-8623-2d03bd7198d8" path="/var/lib/kubelet/pods/439270d4-5c94-4dba-8623-2d03bd7198d8/volumes" Jan 22 07:11:50 crc kubenswrapper[4720]: I0122 07:11:50.595357 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" event={"ID":"866509ea-c033-42f6-8274-6e929e2086ff","Type":"ContainerStarted","Data":"806ced3cbc4ba6c9ad97758119eac78b6e28162e8a0772cef70e81eaecec156f"} Jan 22 07:11:50 crc kubenswrapper[4720]: I0122 07:11:50.595999 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" event={"ID":"866509ea-c033-42f6-8274-6e929e2086ff","Type":"ContainerStarted","Data":"b3089eacd327b6c23e19a40213a2b7103e8399157891af15237cbc863e3631b2"} Jan 22 07:11:50 crc kubenswrapper[4720]: I0122 07:11:50.617693 4720 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" podStartSLOduration=2.617669963 podStartE2EDuration="2.617669963s" podCreationTimestamp="2026-01-22 07:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:50.613530396 +0000 UTC m=+2202.755437111" watchObservedRunningTime="2026-01-22 07:11:50.617669963 +0000 UTC m=+2202.759576668" Jan 22 07:11:53 crc kubenswrapper[4720]: I0122 07:11:53.640052 4720 generic.go:334] "Generic (PLEG): container finished" podID="866509ea-c033-42f6-8274-6e929e2086ff" containerID="806ced3cbc4ba6c9ad97758119eac78b6e28162e8a0772cef70e81eaecec156f" exitCode=0 Jan 22 07:11:53 crc kubenswrapper[4720]: I0122 07:11:53.640159 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" event={"ID":"866509ea-c033-42f6-8274-6e929e2086ff","Type":"ContainerDied","Data":"806ced3cbc4ba6c9ad97758119eac78b6e28162e8a0772cef70e81eaecec156f"} Jan 22 07:11:53 crc kubenswrapper[4720]: I0122 07:11:53.960421 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6427"] Jan 22 07:11:53 crc kubenswrapper[4720]: I0122 07:11:53.960750 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6427" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="registry-server" containerID="cri-o://e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1" gracePeriod=2 Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.440373 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6427"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.489337 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content\") pod \"d95b789f-6df6-421f-bfe5-1d06f018b526\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") "
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.489532 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities\") pod \"d95b789f-6df6-421f-bfe5-1d06f018b526\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") "
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.489651 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvq86\" (UniqueName: \"kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86\") pod \"d95b789f-6df6-421f-bfe5-1d06f018b526\" (UID: \"d95b789f-6df6-421f-bfe5-1d06f018b526\") "
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.491782 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities" (OuterVolumeSpecName: "utilities") pod "d95b789f-6df6-421f-bfe5-1d06f018b526" (UID: "d95b789f-6df6-421f-bfe5-1d06f018b526"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.495983 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86" (OuterVolumeSpecName: "kube-api-access-bvq86") pod "d95b789f-6df6-421f-bfe5-1d06f018b526" (UID: "d95b789f-6df6-421f-bfe5-1d06f018b526"). InnerVolumeSpecName "kube-api-access-bvq86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.555459 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d95b789f-6df6-421f-bfe5-1d06f018b526" (UID: "d95b789f-6df6-421f-bfe5-1d06f018b526"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.592171 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.592244 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d95b789f-6df6-421f-bfe5-1d06f018b526-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.592262 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvq86\" (UniqueName: \"kubernetes.io/projected/d95b789f-6df6-421f-bfe5-1d06f018b526-kube-api-access-bvq86\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.652044 4720 generic.go:334] "Generic (PLEG): container finished" podID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerID="e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1" exitCode=0
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.652333 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6427"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.654070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerDied","Data":"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"}
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.654154 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6427" event={"ID":"d95b789f-6df6-421f-bfe5-1d06f018b526","Type":"ContainerDied","Data":"18b6eff2a150736cd4c0c198bf4e04592693fb4c84930f34e73e43f0660e7d30"}
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.654179 4720 scope.go:117] "RemoveContainer" containerID="e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.696635 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6427"]
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.696724 4720 scope.go:117] "RemoveContainer" containerID="561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.705449 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6427"]
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.722969 4720 scope.go:117] "RemoveContainer" containerID="9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.758704 4720 scope.go:117] "RemoveContainer" containerID="e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"
Jan 22 07:11:54 crc kubenswrapper[4720]: E0122 07:11:54.759221 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1\": container with ID starting with e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1 not found: ID does not exist" containerID="e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.759258 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1"} err="failed to get container status \"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1\": rpc error: code = NotFound desc = could not find container \"e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1\": container with ID starting with e8f88aa0bf0ece5a2d34b83440736fe3472843e554c48898f6e1de0f095ce6a1 not found: ID does not exist"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.759288 4720 scope.go:117] "RemoveContainer" containerID="561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"
Jan 22 07:11:54 crc kubenswrapper[4720]: E0122 07:11:54.759559 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331\": container with ID starting with 561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331 not found: ID does not exist" containerID="561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.759577 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331"} err="failed to get container status \"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331\": rpc error: code = NotFound desc = could not find container \"561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331\": container with ID starting with 561af8a9f1e70f913484bb556be9e8363103c73e67ebc2df010d47b799bce331 not found: ID does not exist"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.759589 4720 scope.go:117] "RemoveContainer" containerID="9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437"
Jan 22 07:11:54 crc kubenswrapper[4720]: E0122 07:11:54.759995 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437\": container with ID starting with 9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437 not found: ID does not exist" containerID="9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.760012 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437"} err="failed to get container status \"9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437\": rpc error: code = NotFound desc = could not find container \"9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437\": container with ID starting with 9a7358250d1c7112039ce45749b18785b072fdcfe3ff5a85340feb03b9470437 not found: ID does not exist"
Jan 22 07:11:54 crc kubenswrapper[4720]: I0122 07:11:54.933432 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.000217 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8ghz\" (UniqueName: \"kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz\") pod \"866509ea-c033-42f6-8274-6e929e2086ff\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") "
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.000279 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data\") pod \"866509ea-c033-42f6-8274-6e929e2086ff\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") "
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.000310 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data\") pod \"866509ea-c033-42f6-8274-6e929e2086ff\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") "
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.000500 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle\") pod \"866509ea-c033-42f6-8274-6e929e2086ff\" (UID: \"866509ea-c033-42f6-8274-6e929e2086ff\") "
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.003721 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz" (OuterVolumeSpecName: "kube-api-access-b8ghz") pod "866509ea-c033-42f6-8274-6e929e2086ff" (UID: "866509ea-c033-42f6-8274-6e929e2086ff"). InnerVolumeSpecName "kube-api-access-b8ghz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.004776 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "866509ea-c033-42f6-8274-6e929e2086ff" (UID: "866509ea-c033-42f6-8274-6e929e2086ff"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.023714 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "866509ea-c033-42f6-8274-6e929e2086ff" (UID: "866509ea-c033-42f6-8274-6e929e2086ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.045706 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data" (OuterVolumeSpecName: "config-data") pod "866509ea-c033-42f6-8274-6e929e2086ff" (UID: "866509ea-c033-42f6-8274-6e929e2086ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.101636 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.101807 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8ghz\" (UniqueName: \"kubernetes.io/projected/866509ea-c033-42f6-8274-6e929e2086ff-kube-api-access-b8ghz\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.101892 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.102026 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866509ea-c033-42f6-8274-6e929e2086ff-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.661726 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6" event={"ID":"866509ea-c033-42f6-8274-6e929e2086ff","Type":"ContainerDied","Data":"b3089eacd327b6c23e19a40213a2b7103e8399157891af15237cbc863e3631b2"}
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.661768 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3089eacd327b6c23e19a40213a2b7103e8399157891af15237cbc863e3631b2"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.662424 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.914350 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:11:55 crc kubenswrapper[4720]: E0122 07:11:55.914794 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866509ea-c033-42f6-8274-6e929e2086ff" containerName="watcher-kuttl-db-sync"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.914817 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="866509ea-c033-42f6-8274-6e929e2086ff" containerName="watcher-kuttl-db-sync"
Jan 22 07:11:55 crc kubenswrapper[4720]: E0122 07:11:55.914839 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="extract-utilities"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.914848 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="extract-utilities"
Jan 22 07:11:55 crc kubenswrapper[4720]: E0122 07:11:55.914867 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="registry-server"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.914875 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="registry-server"
Jan 22 07:11:55 crc kubenswrapper[4720]: E0122 07:11:55.914920 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="extract-content"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.914929 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="extract-content"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.915120 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" containerName="registry-server"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.915153 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="866509ea-c033-42f6-8274-6e929e2086ff" containerName="watcher-kuttl-db-sync"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.916243 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.920338 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.922990 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-zwg9h"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.935109 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.983284 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.984483 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:55 crc kubenswrapper[4720]: I0122 07:11:55.986804 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.003905 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.026001 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.035242 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.035741 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.036038 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.036301 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.036685 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn6n\" (UniqueName: \"kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.084393 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.085769 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.089036 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.099357 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138486 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138555 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138608 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138636 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138657 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138677 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138858 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.138956 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139249 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139304 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh62x\" (UniqueName: \"kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139333 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139370 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dgm\" (UniqueName: \"kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139459 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwn6n\" (UniqueName: \"kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139511 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139566 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.139618 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.140031 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.144181 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.144409 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.144565 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.145050 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.154829 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwn6n\" (UniqueName: \"kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n\") pod \"watcher-kuttl-api-0\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.222193 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95b789f-6df6-421f-bfe5-1d06f018b526" path="/var/lib/kubelet/pods/d95b789f-6df6-421f-bfe5-1d06f018b526/volumes"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.231982 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.240787 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.240851 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh62x\" (UniqueName: \"kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.240877 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.241341 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.241371 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dgm\" (UniqueName: \"kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.242249 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.242855 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.242889 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.242940 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.243013 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.243207 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.244420 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.245040 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.245360 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.245583 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.251284 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.251534 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.264491 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.267130 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.267574 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.293640 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh62x\" (UniqueName: \"kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.294567 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dgm\" (UniqueName: \"kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm\") pod \"watcher-kuttl-applier-0\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.301379 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.404281 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:11:56 crc kubenswrapper[4720]: I0122 07:11:56.922670 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.149787 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.159630 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:11:57 crc kubenswrapper[4720]: W0122 07:11:57.159747 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode515a8f3_9b77_4077_a660_8fbdbd4fe36f.slice/crio-6d5354e40282ce8423322b0af0c14c754c10cfe392137d0000d95f79b671e2dc WatchSource:0}: Error finding container 6d5354e40282ce8423322b0af0c14c754c10cfe392137d0000d95f79b671e2dc: Status 404 returned error can't find the container with id 6d5354e40282ce8423322b0af0c14c754c10cfe392137d0000d95f79b671e2dc Jan 22 07:11:57 crc kubenswrapper[4720]: W0122 07:11:57.162000 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8271367a_5b87_4c4f_8cf4_cbf6d77a7caa.slice/crio-3c99df8e5136cfc4fbb8203a78476fd60f17ee01433934548e0ea583a4808448 WatchSource:0}: Error finding container 3c99df8e5136cfc4fbb8203a78476fd60f17ee01433934548e0ea583a4808448: Status 404 returned error can't find the container with id 3c99df8e5136cfc4fbb8203a78476fd60f17ee01433934548e0ea583a4808448 Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.688466 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa","Type":"ContainerStarted","Data":"7f0d4fd72d80c3e7e2371620a822a74d403195b74bc24759df9438a08e5a3a42"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.688804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa","Type":"ContainerStarted","Data":"3c99df8e5136cfc4fbb8203a78476fd60f17ee01433934548e0ea583a4808448"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.689957 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e515a8f3-9b77-4077-a660-8fbdbd4fe36f","Type":"ContainerStarted","Data":"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.690024 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e515a8f3-9b77-4077-a660-8fbdbd4fe36f","Type":"ContainerStarted","Data":"6d5354e40282ce8423322b0af0c14c754c10cfe392137d0000d95f79b671e2dc"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.691868 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerStarted","Data":"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.691980 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerStarted","Data":"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.691991 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerStarted","Data":"339a66376ae6304fb96e986cb6617afd655ada9f6a7df82101439b1d2ab4f56e"} Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.692298 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.714565 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.7145476830000002 podStartE2EDuration="1.714547683s" podCreationTimestamp="2026-01-22 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:57.711845346 +0000 UTC m=+2209.853752061" watchObservedRunningTime="2026-01-22 07:11:57.714547683 +0000 UTC m=+2209.856454388" Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.734128 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.734113205 podStartE2EDuration="2.734113205s" podCreationTimestamp="2026-01-22 07:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:57.732068058 +0000 UTC m=+2209.873974763" watchObservedRunningTime="2026-01-22 07:11:57.734113205 +0000 UTC m=+2209.876019910" Jan 22 07:11:57 crc kubenswrapper[4720]: I0122 07:11:57.760263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.760242103 podStartE2EDuration="2.760242103s" podCreationTimestamp="2026-01-22 07:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:11:57.755475219 +0000 UTC m=+2209.897381924" 
watchObservedRunningTime="2026-01-22 07:11:57.760242103 +0000 UTC m=+2209.902148818" Jan 22 07:11:59 crc kubenswrapper[4720]: I0122 07:11:59.780172 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:11:59 crc kubenswrapper[4720]: I0122 07:11:59.780536 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:11:59 crc kubenswrapper[4720]: I0122 07:11:59.780587 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 07:11:59 crc kubenswrapper[4720]: I0122 07:11:59.781312 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 07:11:59 crc kubenswrapper[4720]: I0122 07:11:59.781379 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836" gracePeriod=600 Jan 22 07:12:00 crc kubenswrapper[4720]: I0122 07:12:00.361775 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:00 crc kubenswrapper[4720]: I0122 07:12:00.718851 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836" exitCode=0 Jan 22 07:12:00 crc kubenswrapper[4720]: I0122 07:12:00.718960 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836"} Jan 22 07:12:00 crc kubenswrapper[4720]: I0122 07:12:00.719213 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c"} Jan 22 07:12:00 crc kubenswrapper[4720]: I0122 07:12:00.719240 4720 scope.go:117] "RemoveContainer" containerID="b610ae887215ebfcaa39de45e339fcae21a08ebe6e59b991ec1661de0f19a21c" Jan 22 07:12:01 crc kubenswrapper[4720]: I0122 07:12:01.232258 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:01 crc kubenswrapper[4720]: I0122 07:12:01.302503 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.232377 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.239644 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.302584 4720 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.329874 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.405628 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.428303 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.770750 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.776334 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.796336 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:06 crc kubenswrapper[4720]: I0122 07:12:06.812228 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.081237 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.081964 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-central-agent" containerID="cri-o://838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478" gracePeriod=30 Jan 22 07:12:10 
crc kubenswrapper[4720]: I0122 07:12:10.082134 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="proxy-httpd" containerID="cri-o://ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db" gracePeriod=30 Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.082183 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="sg-core" containerID="cri-o://9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5" gracePeriod=30 Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.082184 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-notification-agent" containerID="cri-o://4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494" gracePeriod=30 Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.186685 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.219:3000/\": read tcp 10.217.0.2:48864->10.217.0.219:3000: read: connection reset by peer" Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.804378 4720 generic.go:334] "Generic (PLEG): container finished" podID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerID="ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db" exitCode=0 Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.804702 4720 generic.go:334] "Generic (PLEG): container finished" podID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerID="9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5" exitCode=2 Jan 22 07:12:10 crc kubenswrapper[4720]: 
I0122 07:12:10.804711 4720 generic.go:334] "Generic (PLEG): container finished" podID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerID="838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478" exitCode=0 Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.804456 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerDied","Data":"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db"} Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.804758 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerDied","Data":"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5"} Jan 22 07:12:10 crc kubenswrapper[4720]: I0122 07:12:10.804776 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerDied","Data":"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478"} Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.724893 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733477 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733579 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-446bv\" (UniqueName: \"kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733606 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733659 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733756 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733804 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733926 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.733954 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data\") pod \"32462def-5ce6-4eee-9b2e-4bb394fff83d\" (UID: \"32462def-5ce6-4eee-9b2e-4bb394fff83d\") " Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.734147 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.734371 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.734680 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.734710 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/32462def-5ce6-4eee-9b2e-4bb394fff83d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.764411 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv" (OuterVolumeSpecName: "kube-api-access-446bv") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "kube-api-access-446bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.764855 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts" (OuterVolumeSpecName: "scripts") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.774526 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.836306 4720 generic.go:334] "Generic (PLEG): container finished" podID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerID="4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494" exitCode=0 Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.836365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerDied","Data":"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494"} Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.836422 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"32462def-5ce6-4eee-9b2e-4bb394fff83d","Type":"ContainerDied","Data":"f9e4789db2e8292d9d7b30b1674e7860a6421525cbf5b606f53dfb2d932ace6c"} Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.836444 4720 scope.go:117] "RemoveContainer" containerID="ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.836677 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.837262 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.837299 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-446bv\" (UniqueName: \"kubernetes.io/projected/32462def-5ce6-4eee-9b2e-4bb394fff83d-kube-api-access-446bv\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.837314 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.841001 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.874876 4720 scope.go:117] "RemoveContainer" containerID="9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.879122 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.892388 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data" (OuterVolumeSpecName: "config-data") pod "32462def-5ce6-4eee-9b2e-4bb394fff83d" (UID: "32462def-5ce6-4eee-9b2e-4bb394fff83d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.895172 4720 scope.go:117] "RemoveContainer" containerID="4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.912961 4720 scope.go:117] "RemoveContainer" containerID="838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.937829 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.937868 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.937879 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32462def-5ce6-4eee-9b2e-4bb394fff83d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.939297 4720 scope.go:117] "RemoveContainer" containerID="ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db" Jan 22 07:12:11 crc kubenswrapper[4720]: E0122 07:12:11.939798 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db\": container with ID starting with ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db not found: ID does not exist" containerID="ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.939842 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db"} err="failed to get container status \"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db\": rpc error: code = NotFound desc = could not find container \"ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db\": container with ID starting with ee55746933386d742e8f2f54e2c3377a2803e32848066320654c34af05e3e7db not found: ID does not exist"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.939873 4720 scope.go:117] "RemoveContainer" containerID="9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5"
Jan 22 07:12:11 crc kubenswrapper[4720]: E0122 07:12:11.940355 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5\": container with ID starting with 9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5 not found: ID does not exist" containerID="9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.940378 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5"} err="failed to get container status \"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5\": rpc error: code = NotFound desc = could not find container \"9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5\": container with ID starting with 9bbc5b65195bc545443b88b0e12ad036f20e81361eabbd252a56cc43fd1a91e5 not found: ID does not exist"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.940393 4720 scope.go:117] "RemoveContainer" containerID="4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494"
Jan 22 07:12:11 crc kubenswrapper[4720]: E0122 07:12:11.940893 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494\": container with ID starting with 4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494 not found: ID does not exist" containerID="4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.940937 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494"} err="failed to get container status \"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494\": rpc error: code = NotFound desc = could not find container \"4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494\": container with ID starting with 4739b367212d15d0a7404b3bdedc68c8437a77fcace189834a40c8908c043494 not found: ID does not exist"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.940958 4720 scope.go:117] "RemoveContainer" containerID="838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478"
Jan 22 07:12:11 crc kubenswrapper[4720]: E0122 07:12:11.941299 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478\": container with ID starting with 838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478 not found: ID does not exist" containerID="838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478"
Jan 22 07:12:11 crc kubenswrapper[4720]: I0122 07:12:11.941325 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478"} err="failed to get container status \"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478\": rpc error: code = NotFound desc = could not find container \"838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478\": container with ID starting with 838bfe0de51a6527da22f5b59d9711d8e3b4fc6b9d6e12a28115e78606809478 not found: ID does not exist"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.171863 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.183634 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.193752 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:12 crc kubenswrapper[4720]: E0122 07:12:12.194125 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="sg-core"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194143 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="sg-core"
Jan 22 07:12:12 crc kubenswrapper[4720]: E0122 07:12:12.194158 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="proxy-httpd"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194165 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="proxy-httpd"
Jan 22 07:12:12 crc kubenswrapper[4720]: E0122 07:12:12.194175 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-central-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194180 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-central-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: E0122 07:12:12.194198 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-notification-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194204 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-notification-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194359 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-central-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194374 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="sg-core"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194385 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="ceilometer-notification-agent"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.194397 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" containerName="proxy-httpd"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.195847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.199093 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.200198 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.200332 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.209576 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.220359 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32462def-5ce6-4eee-9b2e-4bb394fff83d" path="/var/lib/kubelet/pods/32462def-5ce6-4eee-9b2e-4bb394fff83d/volumes"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.242528 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.242587 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.344844 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345289 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345396 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzsqk\" (UniqueName: \"kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345460 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345483 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345576 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.345605 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.349518 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.350349 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447536 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447624 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447659 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447701 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzsqk\" (UniqueName: \"kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447724 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.447742 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.448228 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.449020 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.450946 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.451558 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.451705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.465706 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzsqk\" (UniqueName: \"kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk\") pod \"ceilometer-0\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.511847 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:12 crc kubenswrapper[4720]: I0122 07:12:12.940084 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:13 crc kubenswrapper[4720]: I0122 07:12:13.856743 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerStarted","Data":"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"}
Jan 22 07:12:13 crc kubenswrapper[4720]: I0122 07:12:13.856804 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerStarted","Data":"d58816b2eff070d967847b711625fa1123a1b3e7d8214838954f5920a809e1ea"}
Jan 22 07:12:14 crc kubenswrapper[4720]: I0122 07:12:14.868708 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerStarted","Data":"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"}
Jan 22 07:12:15 crc kubenswrapper[4720]: I0122 07:12:15.879406 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerStarted","Data":"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"}
Jan 22 07:12:16 crc kubenswrapper[4720]: I0122 07:12:16.891618 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerStarted","Data":"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"}
Jan 22 07:12:16 crc kubenswrapper[4720]: I0122 07:12:16.892217 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:16 crc kubenswrapper[4720]: I0122 07:12:16.935263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.962108644 podStartE2EDuration="4.935237156s" podCreationTimestamp="2026-01-22 07:12:12 +0000 UTC" firstStartedPulling="2026-01-22 07:12:12.949031163 +0000 UTC m=+2225.090937868" lastFinishedPulling="2026-01-22 07:12:15.922159675 +0000 UTC m=+2228.064066380" observedRunningTime="2026-01-22 07:12:16.927816406 +0000 UTC m=+2229.069723121" watchObservedRunningTime="2026-01-22 07:12:16.935237156 +0000 UTC m=+2229.077143871"
Jan 22 07:12:16 crc kubenswrapper[4720]: I0122 07:12:16.964887 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"]
Jan 22 07:12:16 crc kubenswrapper[4720]: I0122 07:12:16.977468 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-59hk6"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.013118 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher4042-account-delete-m5ds2"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.014209 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.028039 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9d97\" (UniqueName: \"kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.028165 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.041268 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4042-account-delete-m5ds2"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.063241 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.064120 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" containerName="watcher-decision-engine" containerID="cri-o://7f0d4fd72d80c3e7e2371620a822a74d403195b74bc24759df9438a08e5a3a42" gracePeriod=30
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.101677 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.102528 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" containerName="watcher-applier" containerID="cri-o://b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6" gracePeriod=30
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.132082 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.132261 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9d97\" (UniqueName: \"kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.133486 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.157041 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.157622 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-api" containerID="cri-o://cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e" gracePeriod=30
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.157579 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-kuttl-api-log" containerID="cri-o://7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809" gracePeriod=30
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.164299 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9d97\" (UniqueName: \"kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97\") pod \"watcher4042-account-delete-m5ds2\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.333439 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.612707 4720 scope.go:117] "RemoveContainer" containerID="99971402e2f98e7ec904431a823b04bf2b72c067ff53e86c7576d3fc53e0fe04"
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.922376 4720 generic.go:334] "Generic (PLEG): container finished" podID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerID="7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809" exitCode=143
Jan 22 07:12:17 crc kubenswrapper[4720]: I0122 07:12:17.923518 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerDied","Data":"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809"}
Jan 22 07:12:18 crc kubenswrapper[4720]: I0122 07:12:18.022794 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher4042-account-delete-m5ds2"]
Jan 22 07:12:18 crc kubenswrapper[4720]: W0122 07:12:18.032831 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39df1590_14b9_4fa3_a751_c40df86e633f.slice/crio-3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac WatchSource:0}: Error finding container 3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac: Status 404 returned error can't find the container with id 3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac
Jan 22 07:12:18 crc kubenswrapper[4720]: I0122 07:12:18.248978 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866509ea-c033-42f6-8274-6e929e2086ff" path="/var/lib/kubelet/pods/866509ea-c033-42f6-8274-6e929e2086ff/volumes"
Jan 22 07:12:18 crc kubenswrapper[4720]: I0122 07:12:18.937415 4720 generic.go:334] "Generic (PLEG): container finished" podID="39df1590-14b9-4fa3-a751-c40df86e633f" containerID="82f32033f1ff54d39280eee2ec0a53551990022f44471df205f8b5e63471a831" exitCode=0
Jan 22 07:12:18 crc kubenswrapper[4720]: I0122 07:12:18.937495 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2" event={"ID":"39df1590-14b9-4fa3-a751-c40df86e633f","Type":"ContainerDied","Data":"82f32033f1ff54d39280eee2ec0a53551990022f44471df205f8b5e63471a831"}
Jan 22 07:12:18 crc kubenswrapper[4720]: I0122 07:12:18.937832 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2" event={"ID":"39df1590-14b9-4fa3-a751-c40df86e633f","Type":"ContainerStarted","Data":"3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac"}
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.588726 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.727532 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls\") pod \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.727594 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs\") pod \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.727802 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dgm\" (UniqueName: \"kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm\") pod \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.727829 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle\") pod \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.727927 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data\") pod \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\" (UID: \"e515a8f3-9b77-4077-a660-8fbdbd4fe36f\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.728515 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs" (OuterVolumeSpecName: "logs") pod "e515a8f3-9b77-4077-a660-8fbdbd4fe36f" (UID: "e515a8f3-9b77-4077-a660-8fbdbd4fe36f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.735035 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm" (OuterVolumeSpecName: "kube-api-access-f6dgm") pod "e515a8f3-9b77-4077-a660-8fbdbd4fe36f" (UID: "e515a8f3-9b77-4077-a660-8fbdbd4fe36f"). InnerVolumeSpecName "kube-api-access-f6dgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.749944 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.756100 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e515a8f3-9b77-4077-a660-8fbdbd4fe36f" (UID: "e515a8f3-9b77-4077-a660-8fbdbd4fe36f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.798391 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data" (OuterVolumeSpecName: "config-data") pod "e515a8f3-9b77-4077-a660-8fbdbd4fe36f" (UID: "e515a8f3-9b77-4077-a660-8fbdbd4fe36f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.819548 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "e515a8f3-9b77-4077-a660-8fbdbd4fe36f" (UID: "e515a8f3-9b77-4077-a660-8fbdbd4fe36f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.829819 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dgm\" (UniqueName: \"kubernetes.io/projected/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-kube-api-access-f6dgm\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.829851 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.829864 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.829875 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.829888 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e515a8f3-9b77-4077-a660-8fbdbd4fe36f-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.930579 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.930696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.930760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwn6n\" (UniqueName: \"kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.930830 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.931026 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs" (OuterVolumeSpecName: "logs") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.931303 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.931328 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle\") pod \"966f7a85-55c7-4218-93e4-ab5f53c396e1\" (UID: \"966f7a85-55c7-4218-93e4-ab5f53c396e1\") "
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.931612 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/966f7a85-55c7-4218-93e4-ab5f53c396e1-logs\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.939314 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n" (OuterVolumeSpecName: "kube-api-access-qwn6n") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). InnerVolumeSpecName "kube-api-access-qwn6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.949979 4720 generic.go:334] "Generic (PLEG): container finished" podID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerID="cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e" exitCode=0
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.950070 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerDied","Data":"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e"}
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.950102 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"966f7a85-55c7-4218-93e4-ab5f53c396e1","Type":"ContainerDied","Data":"339a66376ae6304fb96e986cb6617afd655ada9f6a7df82101439b1d2ab4f56e"}
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.950116 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.950134 4720 scope.go:117] "RemoveContainer" containerID="cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e"
Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.953544 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.954066 4720 generic.go:334] "Generic (PLEG): container finished" podID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" containerID="b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6" exitCode=0 Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.954202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e515a8f3-9b77-4077-a660-8fbdbd4fe36f","Type":"ContainerDied","Data":"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6"} Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.954269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"e515a8f3-9b77-4077-a660-8fbdbd4fe36f","Type":"ContainerDied","Data":"6d5354e40282ce8423322b0af0c14c754c10cfe392137d0000d95f79b671e2dc"} Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.954305 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.967723 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.982118 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data" (OuterVolumeSpecName: "config-data") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:19 crc kubenswrapper[4720]: I0122 07:12:19.985152 4720 scope.go:117] "RemoveContainer" containerID="7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.003019 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.025986 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.026022 4720 scope.go:117] "RemoveContainer" containerID="cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e" Jan 22 07:12:20 crc kubenswrapper[4720]: E0122 07:12:20.027919 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e\": container with ID starting with cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e not found: ID does not exist" containerID="cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.027969 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e"} err="failed to get container status \"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e\": rpc error: code = NotFound desc = could not find container \"cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e\": container with ID starting with cdade98cdbea121b11c056204d4ef85b839da237917bc47c6d04bec3a8c08a7e not found: ID does not exist" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.028002 4720 scope.go:117] "RemoveContainer" containerID="7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809" Jan 22 
07:12:20 crc kubenswrapper[4720]: E0122 07:12:20.028291 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809\": container with ID starting with 7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809 not found: ID does not exist" containerID="7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.028325 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809"} err="failed to get container status \"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809\": rpc error: code = NotFound desc = could not find container \"7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809\": container with ID starting with 7cdbb37638e90067b54e89dec34f69fe498bbd0b1869c40ed6698b0338972809 not found: ID does not exist" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.028355 4720 scope.go:117] "RemoveContainer" containerID="b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.036656 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.040018 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.040144 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.040207 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwn6n\" (UniqueName: \"kubernetes.io/projected/966f7a85-55c7-4218-93e4-ab5f53c396e1-kube-api-access-qwn6n\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.050410 4720 scope.go:117] "RemoveContainer" containerID="b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6" Jan 22 07:12:20 crc kubenswrapper[4720]: E0122 07:12:20.050888 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6\": container with ID starting with b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6 not found: ID does not exist" containerID="b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.050972 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6"} err="failed to get container status \"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6\": rpc error: code = NotFound desc = could not find container \"b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6\": container with ID starting with b52af6605465c645446c8212a08e28eaa40e5302eec79b9fb01846e9284e84e6 not found: ID does not exist" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.053060 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "966f7a85-55c7-4218-93e4-ab5f53c396e1" (UID: "966f7a85-55c7-4218-93e4-ab5f53c396e1"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.142073 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/966f7a85-55c7-4218-93e4-ab5f53c396e1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.223286 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" path="/var/lib/kubelet/pods/e515a8f3-9b77-4077-a660-8fbdbd4fe36f/volumes" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.280098 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.286743 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.386902 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.448748 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9d97\" (UniqueName: \"kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97\") pod \"39df1590-14b9-4fa3-a751-c40df86e633f\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.448846 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts\") pod \"39df1590-14b9-4fa3-a751-c40df86e633f\" (UID: \"39df1590-14b9-4fa3-a751-c40df86e633f\") " Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.449988 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39df1590-14b9-4fa3-a751-c40df86e633f" (UID: "39df1590-14b9-4fa3-a751-c40df86e633f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.455051 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97" (OuterVolumeSpecName: "kube-api-access-l9d97") pod "39df1590-14b9-4fa3-a751-c40df86e633f" (UID: "39df1590-14b9-4fa3-a751-c40df86e633f"). InnerVolumeSpecName "kube-api-access-l9d97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.551510 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9d97\" (UniqueName: \"kubernetes.io/projected/39df1590-14b9-4fa3-a751-c40df86e633f-kube-api-access-l9d97\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.551554 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39df1590-14b9-4fa3-a751-c40df86e633f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.820534 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.820922 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-central-agent" containerID="cri-o://3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8" gracePeriod=30 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.820959 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="proxy-httpd" containerID="cri-o://e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e" gracePeriod=30 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.820994 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-notification-agent" containerID="cri-o://24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016" gracePeriod=30 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.820950 4720 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="watcher-kuttl-default/ceilometer-0" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="sg-core" containerID="cri-o://efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e" gracePeriod=30 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.979427 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2" event={"ID":"39df1590-14b9-4fa3-a751-c40df86e633f","Type":"ContainerDied","Data":"3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac"} Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.979473 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cb92d1d00ff307c03d1c304e9665cac01c21b9511bf1f8b8add523e90d6a8ac" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.979556 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher4042-account-delete-m5ds2" Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.984826 4720 generic.go:334] "Generic (PLEG): container finished" podID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerID="e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e" exitCode=0 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.984856 4720 generic.go:334] "Generic (PLEG): container finished" podID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerID="efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e" exitCode=2 Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.984967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerDied","Data":"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"} Jan 22 07:12:20 crc kubenswrapper[4720]: I0122 07:12:20.984998 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerDied","Data":"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"} Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.789973 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.874936 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875004 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875098 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875134 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875173 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzsqk\" (UniqueName: 
\"kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875264 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875301 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.875405 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs\") pod \"607fcd76-c250-451f-adc4-aa14a6211d2d\" (UID: \"607fcd76-c250-451f-adc4-aa14a6211d2d\") " Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.876484 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.879817 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk" (OuterVolumeSpecName: "kube-api-access-nzsqk") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "kube-api-access-nzsqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.879994 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts" (OuterVolumeSpecName: "scripts") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.880159 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.915538 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.956511 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976463 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976493 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976503 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976514 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976522 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/607fcd76-c250-451f-adc4-aa14a6211d2d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.976531 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzsqk\" (UniqueName: 
\"kubernetes.io/projected/607fcd76-c250-451f-adc4-aa14a6211d2d-kube-api-access-nzsqk\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:21 crc kubenswrapper[4720]: I0122 07:12:21.977107 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002188 4720 generic.go:334] "Generic (PLEG): container finished" podID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerID="24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016" exitCode=0 Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002246 4720 generic.go:334] "Generic (PLEG): container finished" podID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerID="3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8" exitCode=0 Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002275 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerDied","Data":"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"} Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002311 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerDied","Data":"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"} Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002324 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002338 4720 scope.go:117] "RemoveContainer" containerID="e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.002325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"607fcd76-c250-451f-adc4-aa14a6211d2d","Type":"ContainerDied","Data":"d58816b2eff070d967847b711625fa1123a1b3e7d8214838954f5920a809e1ea"} Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.008410 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data" (OuterVolumeSpecName: "config-data") pod "607fcd76-c250-451f-adc4-aa14a6211d2d" (UID: "607fcd76-c250-451f-adc4-aa14a6211d2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.025703 4720 scope.go:117] "RemoveContainer" containerID="efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.051000 4720 scope.go:117] "RemoveContainer" containerID="24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.077791 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.078183 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607fcd76-c250-451f-adc4-aa14a6211d2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.085650 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-db-create-56tkg"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.097035 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-4042-account-create-update-rjvws"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.102771 4720 scope.go:117] "RemoveContainer" containerID="3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.104805 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-56tkg"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.114670 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-4042-account-create-update-rjvws"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.126201 4720 scope.go:117] "RemoveContainer" containerID="e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.126880 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e\": container with ID starting with e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e not found: ID does not exist" containerID="e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.126925 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"} err="failed to get container status \"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e\": rpc error: code = NotFound desc = could not find container \"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e\": container with ID starting with e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.126947 4720 scope.go:117] "RemoveContainer" containerID="efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.127280 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e\": container with ID starting with efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e not found: ID does not exist" containerID="efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127326 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"} err="failed to get container status \"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e\": rpc error: code = NotFound desc = could not find container \"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e\": container with ID starting with efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127343 4720 scope.go:117] "RemoveContainer" containerID="24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.127600 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016\": container with ID starting with 24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016 not found: ID does not exist" containerID="24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127617 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"} err="failed to get container status \"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016\": rpc error: code = NotFound desc = could not find container \"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016\": container with ID starting with 24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016 not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127628 4720 scope.go:117] "RemoveContainer" containerID="3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.127941 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8\": container with ID starting with 3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8 not found: ID does not exist" containerID="3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127958 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"} err="failed to get container status \"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8\": rpc error: code = NotFound desc = could not find container \"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8\": container with ID starting with 3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8 not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.127972 4720 scope.go:117] "RemoveContainer" containerID="e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128321 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e"} err="failed to get container status \"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e\": rpc error: code = NotFound desc = could not find container \"e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e\": container with ID starting with e3a39218eac10cabf6115ccb465c0e85397f9a88695b48f18cf2501f31625f6e not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128342 4720 scope.go:117] "RemoveContainer" containerID="efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128599 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e"} err="failed to get container status \"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e\": rpc error: code = NotFound desc = could not find container \"efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e\": container with ID starting with efad0716fff836604bc846d6954101d52d5fb596ea25c8c73a87488fa1e8118e not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128615 4720 scope.go:117] "RemoveContainer" containerID="24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128892 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016"} err="failed to get container status \"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016\": rpc error: code = NotFound desc = could not find container \"24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016\": container with ID starting with 24e6aac6da8dbf1e9490b9d6331c0d6a559dac17f6dbb259f81a9b05ee2e0016 not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.128919 4720 scope.go:117] "RemoveContainer" containerID="3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.129381 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8"} err="failed to get container status \"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8\": rpc error: code = NotFound desc = could not find container \"3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8\": container with ID starting with 3ebb2c2409ab4e9c0d175ababd94c48b22f3bc5f4e34655466d28f8d1daa6bf8 not found: ID does not exist"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.131012 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher4042-account-delete-m5ds2"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.144995 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher4042-account-delete-m5ds2"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.166688 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-wfvc6"]
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167328 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" containerName="watcher-applier"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.167417 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" containerName="watcher-applier"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167481 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-central-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.167534 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-central-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167587 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-kuttl-api-log"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.167643 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-kuttl-api-log"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167698 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-notification-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.167750 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-notification-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167818 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="proxy-httpd"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.167872 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="proxy-httpd"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.167953 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-api"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168009 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-api"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.168067 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="sg-core"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168129 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="sg-core"
Jan 22 07:12:22 crc kubenswrapper[4720]: E0122 07:12:22.168199 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39df1590-14b9-4fa3-a751-c40df86e633f" containerName="mariadb-account-delete"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168257 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="39df1590-14b9-4fa3-a751-c40df86e633f" containerName="mariadb-account-delete"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168471 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e515a8f3-9b77-4077-a660-8fbdbd4fe36f" containerName="watcher-applier"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168541 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-central-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168599 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-kuttl-api-log"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168656 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="39df1590-14b9-4fa3-a751-c40df86e633f" containerName="mariadb-account-delete"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168720 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" containerName="watcher-api"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168774 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="proxy-httpd"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168827 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="sg-core"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.168882 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" containerName="ceilometer-notification-agent"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.169598 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.177506 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wfvc6"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.179315 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkbgb\" (UniqueName: \"kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.179394 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.231351 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39df1590-14b9-4fa3-a751-c40df86e633f" path="/var/lib/kubelet/pods/39df1590-14b9-4fa3-a751-c40df86e633f/volumes"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.232080 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603e01a3-d099-4300-8a39-7987332eed09" path="/var/lib/kubelet/pods/603e01a3-d099-4300-8a39-7987332eed09/volumes"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.232666 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966f7a85-55c7-4218-93e4-ab5f53c396e1" path="/var/lib/kubelet/pods/966f7a85-55c7-4218-93e4-ab5f53c396e1/volumes"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.234413 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a48a3256-414e-4999-b919-fe801092cf23" path="/var/lib/kubelet/pods/a48a3256-414e-4999-b919-fe801092cf23/volumes"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.254155 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-dl67x"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.255575 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.262251 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.268128 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-dl67x"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.281175 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkbgb\" (UniqueName: \"kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.281522 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.281709 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.281899 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw5pw\" (UniqueName: \"kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.284193 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.301929 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkbgb\" (UniqueName: \"kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb\") pod \"watcher-db-create-wfvc6\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.373089 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.379395 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.383743 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.383976 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw5pw\" (UniqueName: \"kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.384727 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.392235 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.394472 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.397023 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.397353 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.398282 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.412787 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.415576 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw5pw\" (UniqueName: \"kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw\") pod \"watcher-test-account-create-update-dl67x\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484494 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484540 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484570 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484602 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484782 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484827 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8l4d\" (UniqueName: \"kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.484937 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.485020 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.490585 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wfvc6"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586079 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586467 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586487 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586512 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586550 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586598 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586617 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8l4d\" (UniqueName: \"kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.586660 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.589670 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.595688 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.598233 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.599503 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.602395 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.603487 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.616485 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8l4d\" (UniqueName: \"kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d\") pod \"ceilometer-0\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.658338 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.747581 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:22 crc kubenswrapper[4720]: I0122 07:12:22.944716 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wfvc6"]
Jan 22 07:12:22 crc kubenswrapper[4720]: W0122 07:12:22.951777 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b9d35f_d279_4f5c_8316_9ee5cb5e8b68.slice/crio-c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f WatchSource:0}: Error finding container c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f: Status 404 returned error can't find the container with id c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f
Jan 22 07:12:23 crc kubenswrapper[4720]: I0122 07:12:23.013293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wfvc6" event={"ID":"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68","Type":"ContainerStarted","Data":"c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f"}
Jan 22 07:12:23 crc kubenswrapper[4720]: I0122 07:12:23.197982 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-dl67x"]
Jan 22 07:12:23 crc kubenswrapper[4720]: W0122 07:12:23.267805 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f58c7c4_42ec_42ec_9c37_44cdb8490082.slice/crio-5bb5e5af63ac4b99b18ab616a85f61bca94db1410366c8b1008df72a5d84b9f8 WatchSource:0}: Error finding container 5bb5e5af63ac4b99b18ab616a85f61bca94db1410366c8b1008df72a5d84b9f8: Status 404 returned error can't find the container with id 5bb5e5af63ac4b99b18ab616a85f61bca94db1410366c8b1008df72a5d84b9f8
Jan 22 07:12:23 crc kubenswrapper[4720]: I0122 07:12:23.270850 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.032269 4720 generic.go:334] "Generic (PLEG): container finished" podID="89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" containerID="7124e005859df9f544b6042b0339a5236a0dda1cf1b3503aeb1de614855dd7d9" exitCode=0
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.032652 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wfvc6" event={"ID":"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68","Type":"ContainerDied","Data":"7124e005859df9f544b6042b0339a5236a0dda1cf1b3503aeb1de614855dd7d9"}
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.043579 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerStarted","Data":"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"}
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.043663 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerStarted","Data":"5bb5e5af63ac4b99b18ab616a85f61bca94db1410366c8b1008df72a5d84b9f8"}
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.054143 4720 generic.go:334] "Generic (PLEG): container finished" podID="7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" containerID="b069cde98b1b46e01886309eab44f22bb37d6c4097276e0df4ed2c93c64c7aa5" exitCode=0
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.054205 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x" event={"ID":"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59","Type":"ContainerDied","Data":"b069cde98b1b46e01886309eab44f22bb37d6c4097276e0df4ed2c93c64c7aa5"}
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.054293 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x" event={"ID":"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59","Type":"ContainerStarted","Data":"7771295fbaf661c0ad0885f4e9d9c0bf4468d118c6a7d0eacd69025d73c3e441"}
Jan 22 07:12:24 crc kubenswrapper[4720]: I0122 07:12:24.222849 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607fcd76-c250-451f-adc4-aa14a6211d2d" path="/var/lib/kubelet/pods/607fcd76-c250-451f-adc4-aa14a6211d2d/volumes"
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.067563 4720 generic.go:334] "Generic (PLEG): container finished" podID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" containerID="7f0d4fd72d80c3e7e2371620a822a74d403195b74bc24759df9438a08e5a3a42" exitCode=0
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.067649 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa","Type":"ContainerDied","Data":"7f0d4fd72d80c3e7e2371620a822a74d403195b74bc24759df9438a08e5a3a42"}
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.072232 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerStarted","Data":"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef"}
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.346858 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448180 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448246 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448452 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh62x\" (UniqueName: \"kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448491 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448561 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.448619 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data\") pod \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\" (UID: \"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa\") "
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.449310 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs" (OuterVolumeSpecName: "logs") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.458097 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x" (OuterVolumeSpecName: "kube-api-access-dh62x") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa"). InnerVolumeSpecName "kube-api-access-dh62x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.477264 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.500604 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa").
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.532851 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data" (OuterVolumeSpecName: "config-data") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.549662 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.549703 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh62x\" (UniqueName: \"kubernetes.io/projected/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-kube-api-access-dh62x\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.549717 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.549731 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.549743 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.572791 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" (UID: "8271367a-5b87-4c4f-8cf4-cbf6d77a7caa"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.647746 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wfvc6" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.651114 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts\") pod \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.651198 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkbgb\" (UniqueName: \"kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb\") pod \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\" (UID: \"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68\") " Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.651606 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.651717 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" (UID: "89b9d35f-d279-4f5c-8316-9ee5cb5e8b68"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.654601 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.658562 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb" (OuterVolumeSpecName: "kube-api-access-hkbgb") pod "89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" (UID: "89b9d35f-d279-4f5c-8316-9ee5cb5e8b68"). InnerVolumeSpecName "kube-api-access-hkbgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.752525 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts\") pod \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.752607 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw5pw\" (UniqueName: \"kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw\") pod \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\" (UID: \"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59\") " Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.752872 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.752886 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkbgb\" (UniqueName: \"kubernetes.io/projected/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68-kube-api-access-hkbgb\") on node \"crc\" 
DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.753126 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" (UID: "7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.755784 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw" (OuterVolumeSpecName: "kube-api-access-rw5pw") pod "7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" (UID: "7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59"). InnerVolumeSpecName "kube-api-access-rw5pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.854474 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:25 crc kubenswrapper[4720]: I0122 07:12:25.854509 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw5pw\" (UniqueName: \"kubernetes.io/projected/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59-kube-api-access-rw5pw\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.080601 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.080604 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-dl67x" event={"ID":"7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59","Type":"ContainerDied","Data":"7771295fbaf661c0ad0885f4e9d9c0bf4468d118c6a7d0eacd69025d73c3e441"} Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.080746 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7771295fbaf661c0ad0885f4e9d9c0bf4468d118c6a7d0eacd69025d73c3e441" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.081978 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-wfvc6" event={"ID":"89b9d35f-d279-4f5c-8316-9ee5cb5e8b68","Type":"ContainerDied","Data":"c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f"} Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.081998 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c20e3d7063eed0fa68f5bcc459d01fba570534817830314cc91107d580272e0f" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.082016 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-wfvc6" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.084659 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8271367a-5b87-4c4f-8cf4-cbf6d77a7caa","Type":"ContainerDied","Data":"3c99df8e5136cfc4fbb8203a78476fd60f17ee01433934548e0ea583a4808448"} Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.084709 4720 scope.go:117] "RemoveContainer" containerID="7f0d4fd72d80c3e7e2371620a822a74d403195b74bc24759df9438a08e5a3a42" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.084718 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.098151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerStarted","Data":"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651"} Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.135968 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.143332 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:12:26 crc kubenswrapper[4720]: I0122 07:12:26.221296 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" path="/var/lib/kubelet/pods/8271367a-5b87-4c4f-8cf4-cbf6d77a7caa/volumes" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.111683 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerStarted","Data":"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4"} Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.112178 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.139312 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.9667867 podStartE2EDuration="5.139288253s" podCreationTimestamp="2026-01-22 07:12:22 +0000 UTC" firstStartedPulling="2026-01-22 07:12:23.270107775 +0000 UTC m=+2235.412014470" lastFinishedPulling="2026-01-22 07:12:26.442609318 +0000 UTC m=+2238.584516023" observedRunningTime="2026-01-22 07:12:27.131799462 +0000 UTC m=+2239.273706187" watchObservedRunningTime="2026-01-22 07:12:27.139288253 +0000 UTC m=+2239.281194958" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.530151 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"] Jan 22 07:12:27 crc kubenswrapper[4720]: E0122 07:12:27.531063 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" containerName="watcher-decision-engine" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531085 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" containerName="watcher-decision-engine" Jan 22 07:12:27 crc kubenswrapper[4720]: E0122 07:12:27.531121 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" containerName="mariadb-account-create-update" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531129 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" containerName="mariadb-account-create-update" Jan 22 07:12:27 crc kubenswrapper[4720]: 
E0122 07:12:27.531160 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" containerName="mariadb-database-create" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531168 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" containerName="mariadb-database-create" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531628 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" containerName="mariadb-account-create-update" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531652 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" containerName="mariadb-database-create" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.531681 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8271367a-5b87-4c4f-8cf4-cbf6d77a7caa" containerName="watcher-decision-engine" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.532848 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.542656 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-r9kxh" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.560697 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.564922 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"] Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.690021 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.690109 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.690150 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bffjt\" (UniqueName: \"kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.690192 4720 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.791668 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.791785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.791833 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bffjt\" (UniqueName: \"kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.791890 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 
07:12:27.808668 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.808991 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.809865 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.812748 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bffjt\" (UniqueName: \"kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt\") pod \"watcher-kuttl-db-sync-4mqkv\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:27 crc kubenswrapper[4720]: I0122 07:12:27.870693 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:28 crc kubenswrapper[4720]: I0122 07:12:28.384005 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"] Jan 22 07:12:28 crc kubenswrapper[4720]: W0122 07:12:28.388421 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42e812cf_4d66_42df_9fdb_f5b80e6a2766.slice/crio-e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11 WatchSource:0}: Error finding container e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11: Status 404 returned error can't find the container with id e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11 Jan 22 07:12:29 crc kubenswrapper[4720]: I0122 07:12:29.134500 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" event={"ID":"42e812cf-4d66-42df-9fdb-f5b80e6a2766","Type":"ContainerStarted","Data":"9db6abd6cb610984867ad2454018e0fd723010ea518ed4379a86b3ee88bb3530"} Jan 22 07:12:29 crc kubenswrapper[4720]: I0122 07:12:29.134874 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" event={"ID":"42e812cf-4d66-42df-9fdb-f5b80e6a2766","Type":"ContainerStarted","Data":"e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11"} Jan 22 07:12:29 crc kubenswrapper[4720]: I0122 07:12:29.152533 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" podStartSLOduration=2.152512688 podStartE2EDuration="2.152512688s" podCreationTimestamp="2026-01-22 07:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:12:29.147463466 +0000 UTC m=+2241.289370171" watchObservedRunningTime="2026-01-22 
07:12:29.152512688 +0000 UTC m=+2241.294419393" Jan 22 07:12:31 crc kubenswrapper[4720]: I0122 07:12:31.156238 4720 generic.go:334] "Generic (PLEG): container finished" podID="42e812cf-4d66-42df-9fdb-f5b80e6a2766" containerID="9db6abd6cb610984867ad2454018e0fd723010ea518ed4379a86b3ee88bb3530" exitCode=0 Jan 22 07:12:31 crc kubenswrapper[4720]: I0122 07:12:31.156466 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" event={"ID":"42e812cf-4d66-42df-9fdb-f5b80e6a2766","Type":"ContainerDied","Data":"9db6abd6cb610984867ad2454018e0fd723010ea518ed4379a86b3ee88bb3530"} Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.569074 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.691760 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bffjt\" (UniqueName: \"kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt\") pod \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.691829 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data\") pod \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.691939 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle\") pod \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.692013 4720 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data\") pod \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\" (UID: \"42e812cf-4d66-42df-9fdb-f5b80e6a2766\") " Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.699544 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt" (OuterVolumeSpecName: "kube-api-access-bffjt") pod "42e812cf-4d66-42df-9fdb-f5b80e6a2766" (UID: "42e812cf-4d66-42df-9fdb-f5b80e6a2766"). InnerVolumeSpecName "kube-api-access-bffjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.702274 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "42e812cf-4d66-42df-9fdb-f5b80e6a2766" (UID: "42e812cf-4d66-42df-9fdb-f5b80e6a2766"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.718249 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42e812cf-4d66-42df-9fdb-f5b80e6a2766" (UID: "42e812cf-4d66-42df-9fdb-f5b80e6a2766"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.747221 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data" (OuterVolumeSpecName: "config-data") pod "42e812cf-4d66-42df-9fdb-f5b80e6a2766" (UID: "42e812cf-4d66-42df-9fdb-f5b80e6a2766"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.793945 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.793985 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bffjt\" (UniqueName: \"kubernetes.io/projected/42e812cf-4d66-42df-9fdb-f5b80e6a2766-kube-api-access-bffjt\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.794001 4720 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:32 crc kubenswrapper[4720]: I0122 07:12:32.794014 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42e812cf-4d66-42df-9fdb-f5b80e6a2766-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.176871 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv" event={"ID":"42e812cf-4d66-42df-9fdb-f5b80e6a2766","Type":"ContainerDied","Data":"e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11"}
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.177250 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61a2f2cf463131b002011f4adae440db4b5956c3dad0d372408873164b91b11"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.176988 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.456863 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: E0122 07:12:33.457295 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e812cf-4d66-42df-9fdb-f5b80e6a2766" containerName="watcher-kuttl-db-sync"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.457313 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e812cf-4d66-42df-9fdb-f5b80e6a2766" containerName="watcher-kuttl-db-sync"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.457475 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e812cf-4d66-42df-9fdb-f5b80e6a2766" containerName="watcher-kuttl-db-sync"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.458454 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.461702 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-r9kxh"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.464852 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.465248 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.466806 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.478379 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.484749 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.502354 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.504082 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.506575 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.506762 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507024 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507235 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507372 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507580 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507707 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507817 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.507962 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rhk\" (UniqueName: \"kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508130 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508266 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gs4n\" (UniqueName: \"kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508391 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4d2m\" (UniqueName: \"kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508509 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508612 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508716 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.508822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.509448 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.513678 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.523651 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.593672 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.595835 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.607551 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611385 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611457 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611490 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611517 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611544 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611566 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611583 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611611 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611627 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611652 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rhk\" (UniqueName: \"kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611676 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611699 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gs4n\" (UniqueName: \"kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611719 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4d2m\" (UniqueName: \"kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611742 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611759 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611779 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611795 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611842 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611859 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611880 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611903 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611940 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.611963 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.614814 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.615048 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.615175 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.617627 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.624734 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.624759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625117 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625409 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625514 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625569 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625661 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.625759 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.628362 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.633707 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.638654 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.639861 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rhk\" (UniqueName: \"kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk\") pod \"watcher-kuttl-api-0\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.648028 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gs4n\" (UniqueName: \"kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n\") pod \"watcher-kuttl-applier-0\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.653027 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4d2m\" (UniqueName: \"kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m\") pod \"watcher-kuttl-api-1\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712407 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712474 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712493 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712542 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.712588 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.713726 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.717893 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.717939 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.720523 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.728215 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.737094 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.779460 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.787072 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.831703 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0"
Jan 22 07:12:33 crc kubenswrapper[4720]: I0122 07:12:33.913517 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 22 07:12:34 crc kubenswrapper[4720]: I0122 07:12:34.305586 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:12:34 crc kubenswrapper[4720]: W0122 07:12:34.459828 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1037ddad_a13e_4701_ad46_a24948f9973f.slice/crio-93171a2d7c6d3a01dc080073bbb80cdcce5ca45893e4c143f9aa2fd9e7cb88c6 WatchSource:0}: Error finding container 93171a2d7c6d3a01dc080073bbb80cdcce5ca45893e4c143f9aa2fd9e7cb88c6: Status 404 returned error can't find the container with id 93171a2d7c6d3a01dc080073bbb80cdcce5ca45893e4c143f9aa2fd9e7cb88c6
Jan 22 07:12:34 crc kubenswrapper[4720]: I0122 07:12:34.470714 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:12:34 crc kubenswrapper[4720]: I0122 07:12:34.594258 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:12:34 crc kubenswrapper[4720]: W0122 07:12:34.601184 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod228f711d_bac2_4ac4_b837_8b86b4111f50.slice/crio-cb094a7b921638824e8bc9e9726330c13f303ba013840ee1bb45064d321e8674 WatchSource:0}: Error finding container cb094a7b921638824e8bc9e9726330c13f303ba013840ee1bb45064d321e8674: Status 404 returned error can't find the container with id cb094a7b921638824e8bc9e9726330c13f303ba013840ee1bb45064d321e8674
Jan 22 07:12:34 crc kubenswrapper[4720]: I0122 07:12:34.723724 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.198524 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerStarted","Data":"d3fe731cca03907b7be05ea13bd6dae95c63f23bc76f5de7a4d4035179c56980"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.198945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerStarted","Data":"a100977fe4a0b7b518a80888dd685d14f82f3654474d4af1105cfc244810533f"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.198966 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerStarted","Data":"93171a2d7c6d3a01dc080073bbb80cdcce5ca45893e4c143f9aa2fd9e7cb88c6"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.200609 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1"
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.203958 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"228f711d-bac2-4ac4-b837-8b86b4111f50","Type":"ContainerStarted","Data":"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.204020 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"228f711d-bac2-4ac4-b837-8b86b4111f50","Type":"ContainerStarted","Data":"cb094a7b921638824e8bc9e9726330c13f303ba013840ee1bb45064d321e8674"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.220612 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerStarted","Data":"9e9781537c0c644ace106ee566300f32621fc025a35d48790e114f2e902e4543"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.220692 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerStarted","Data":"eb42e166395ae91e6bf5f2b28e698a84e27669e5d87aa18d80ea9b9844fcd3cb"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.220713 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerStarted","Data":"17a47724550455ace710a755a976cf04e1501ccd8c444719687c421f3241889a"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.221093 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0"
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.225964 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3253a75f-f2ab-43fe-9b62-cfa02849f7bc","Type":"ContainerStarted","Data":"0022af083d2cffb0ce09b94a6d7cf971e0d15f2bdcb9207280c78e61165965dc"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.226030 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3253a75f-f2ab-43fe-9b62-cfa02849f7bc","Type":"ContainerStarted","Data":"21de1eae354dd97e32290a6a930bdcf9f45993a88e36d0dfa9012f7122d16e79"}
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.243068 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.2430441979999998 podStartE2EDuration="2.243044198s" podCreationTimestamp="2026-01-22 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:12:35.229431713 +0000 UTC m=+2247.371338418" watchObservedRunningTime="2026-01-22 07:12:35.243044198 +0000 UTC m=+2247.384950923"
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.277263 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.277241783 podStartE2EDuration="2.277241783s" podCreationTimestamp="2026-01-22 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:12:35.268953269 +0000 UTC m=+2247.410859984" watchObservedRunningTime="2026-01-22 07:12:35.277241783 +0000 UTC m=+2247.419148498"
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.332577 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.332556266 podStartE2EDuration="2.332556266s" podCreationTimestamp="2026-01-22 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:12:35.308527377 +0000 UTC m=+2247.450434082" watchObservedRunningTime="2026-01-22 07:12:35.332556266 +0000 UTC m=+2247.474462981"
Jan 22 07:12:35 crc kubenswrapper[4720]: I0122 07:12:35.337968 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.3379436780000002 podStartE2EDuration="2.337943678s" podCreationTimestamp="2026-01-22 07:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:12:35.330658472 +0000 UTC m=+2247.472565197" watchObservedRunningTime="2026-01-22 07:12:35.337943678 +0000 UTC
m=+2247.479850383" Jan 22 07:12:37 crc kubenswrapper[4720]: I0122 07:12:37.256590 4720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 22 07:12:37 crc kubenswrapper[4720]: I0122 07:12:37.966779 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:12:38 crc kubenswrapper[4720]: I0122 07:12:38.158649 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:38 crc kubenswrapper[4720]: I0122 07:12:38.780600 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:38 crc kubenswrapper[4720]: I0122 07:12:38.787183 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:12:38 crc kubenswrapper[4720]: I0122 07:12:38.832298 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:39 crc kubenswrapper[4720]: E0122 07:12:39.190368 4720 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:47744->38.102.83.147:40617: write tcp 38.102.83.147:47744->38.102.83.147:40617: write: broken pipe Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.780266 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.787616 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.791769 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.819630 4720 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.832881 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.864234 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.913960 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:43 crc kubenswrapper[4720]: I0122 07:12:43.940552 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:44 crc kubenswrapper[4720]: I0122 07:12:44.312295 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:44 crc kubenswrapper[4720]: I0122 07:12:44.316886 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:12:44 crc kubenswrapper[4720]: I0122 07:12:44.317423 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:12:44 crc kubenswrapper[4720]: I0122 07:12:44.342320 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:12:44 crc kubenswrapper[4720]: I0122 07:12:44.354323 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:12:45 crc kubenswrapper[4720]: I0122 07:12:45.966300 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:45 crc 
kubenswrapper[4720]: I0122 07:12:45.967535 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="proxy-httpd" containerID="cri-o://d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4" gracePeriod=30 Jan 22 07:12:45 crc kubenswrapper[4720]: I0122 07:12:45.967591 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="sg-core" containerID="cri-o://d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651" gracePeriod=30 Jan 22 07:12:45 crc kubenswrapper[4720]: I0122 07:12:45.967591 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-notification-agent" containerID="cri-o://b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef" gracePeriod=30 Jan 22 07:12:45 crc kubenswrapper[4720]: I0122 07:12:45.968716 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-central-agent" containerID="cri-o://cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70" gracePeriod=30 Jan 22 07:12:45 crc kubenswrapper[4720]: I0122 07:12:45.990885 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.230:3000/\": EOF" Jan 22 07:12:46 crc kubenswrapper[4720]: I0122 07:12:46.339095 4720 generic.go:334] "Generic (PLEG): container finished" podID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerID="d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4" exitCode=0 Jan 22 
07:12:46 crc kubenswrapper[4720]: I0122 07:12:46.339611 4720 generic.go:334] "Generic (PLEG): container finished" podID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerID="d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651" exitCode=2 Jan 22 07:12:46 crc kubenswrapper[4720]: I0122 07:12:46.339438 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerDied","Data":"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4"} Jan 22 07:12:46 crc kubenswrapper[4720]: I0122 07:12:46.339758 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerDied","Data":"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651"} Jan 22 07:12:46 crc kubenswrapper[4720]: I0122 07:12:46.905551 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012302 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8l4d\" (UniqueName: \"kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012402 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012433 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012470 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012508 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012639 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012699 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.012785 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data\") pod \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\" (UID: \"5f58c7c4-42ec-42ec-9c37-44cdb8490082\") " Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.013219 4720 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.018137 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.018720 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d" (OuterVolumeSpecName: "kube-api-access-t8l4d") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "kube-api-access-t8l4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.021386 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts" (OuterVolumeSpecName: "scripts") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.042239 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.069597 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.087305 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.107530 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data" (OuterVolumeSpecName: "config-data") pod "5f58c7c4-42ec-42ec-9c37-44cdb8490082" (UID: "5f58c7c4-42ec-42ec-9c37-44cdb8490082"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115479 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115514 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115523 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5f58c7c4-42ec-42ec-9c37-44cdb8490082-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115532 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115544 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115554 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115562 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8l4d\" (UniqueName: \"kubernetes.io/projected/5f58c7c4-42ec-42ec-9c37-44cdb8490082-kube-api-access-t8l4d\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.115574 4720 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5f58c7c4-42ec-42ec-9c37-44cdb8490082-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350183 4720 generic.go:334] "Generic (PLEG): container finished" podID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerID="b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef" exitCode=0 Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350216 4720 generic.go:334] "Generic (PLEG): container finished" podID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerID="cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70" exitCode=0 Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350236 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerDied","Data":"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef"} Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350269 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerDied","Data":"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"} Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350279 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5f58c7c4-42ec-42ec-9c37-44cdb8490082","Type":"ContainerDied","Data":"5bb5e5af63ac4b99b18ab616a85f61bca94db1410366c8b1008df72a5d84b9f8"} Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350295 4720 scope.go:117] "RemoveContainer" containerID="d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.350306 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.377358 4720 scope.go:117] "RemoveContainer" containerID="d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.382565 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.394749 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.398874 4720 scope.go:117] "RemoveContainer" containerID="b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.411457 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.411943 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-notification-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.411962 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-notification-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.411980 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="proxy-httpd" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.411989 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="proxy-httpd" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.412008 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="sg-core" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412019 4720 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="sg-core" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.412031 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-central-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412039 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-central-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412255 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="proxy-httpd" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412278 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="sg-core" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412293 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-central-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.412310 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" containerName="ceilometer-notification-agent" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.414550 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.418563 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.421977 4720 scope.go:117] "RemoveContainer" containerID="cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.425585 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.425927 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.449160 4720 scope.go:117] "RemoveContainer" containerID="d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.450834 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4\": container with ID starting with d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4 not found: ID does not exist" containerID="d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.450933 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4"} err="failed to get container status \"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4\": rpc error: code = NotFound desc = could not find container \"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4\": container with ID starting with d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4 not 
found: ID does not exist" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.450975 4720 scope.go:117] "RemoveContainer" containerID="d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.452243 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651\": container with ID starting with d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651 not found: ID does not exist" containerID="d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.452279 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651"} err="failed to get container status \"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651\": rpc error: code = NotFound desc = could not find container \"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651\": container with ID starting with d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651 not found: ID does not exist" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.452303 4720 scope.go:117] "RemoveContainer" containerID="b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef" Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.452645 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef\": container with ID starting with b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef not found: ID does not exist" containerID="b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef" Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.452691 4720 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef"} err="failed to get container status \"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef\": rpc error: code = NotFound desc = could not find container \"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef\": container with ID starting with b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.452727 4720 scope.go:117] "RemoveContainer" containerID="cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"
Jan 22 07:12:47 crc kubenswrapper[4720]: E0122 07:12:47.452985 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70\": container with ID starting with cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70 not found: ID does not exist" containerID="cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453010 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"} err="failed to get container status \"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70\": rpc error: code = NotFound desc = could not find container \"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70\": container with ID starting with cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70 not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453028 4720 scope.go:117] "RemoveContainer" containerID="d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453328 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4"} err="failed to get container status \"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4\": rpc error: code = NotFound desc = could not find container \"d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4\": container with ID starting with d50c20d35c348c7e439c65ca188d36c0e5bcccebb10c6c7a23bbb986381bd2b4 not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453373 4720 scope.go:117] "RemoveContainer" containerID="d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453593 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651"} err="failed to get container status \"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651\": rpc error: code = NotFound desc = could not find container \"d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651\": container with ID starting with d34c6ee8c1fe6bbdb9cb5578a6ad657d13dd7a9a4f159fec07f14c502c1e3651 not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453627 4720 scope.go:117] "RemoveContainer" containerID="b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453971 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef"} err="failed to get container status \"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef\": rpc error: code = NotFound desc = could not find container \"b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef\": container with ID starting with b5e361375f3a25a54cbc5e9ef43e15acdf498c9da49620a3bec186591f769fef not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.453992 4720 scope.go:117] "RemoveContainer" containerID="cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.454041 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.454302 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70"} err="failed to get container status \"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70\": rpc error: code = NotFound desc = could not find container \"cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70\": container with ID starting with cc9abd2956dbbf4f23319a9fd62abf25645bb85630c613218fe3fa501e796f70 not found: ID does not exist"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525089 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525155 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525190 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525227 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525273 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525303 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525376 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-875kf\" (UniqueName: \"kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.525409 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-875kf\" (UniqueName: \"kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627431 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627488 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627512 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627535 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627560 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627591 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.627614 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.629048 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.629245 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.631705 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.632019 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.633194 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.646325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.647626 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.648307 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-875kf\" (UniqueName: \"kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf\") pod \"ceilometer-0\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:47 crc kubenswrapper[4720]: I0122 07:12:47.738037 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:48 crc kubenswrapper[4720]: I0122 07:12:48.197683 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 22 07:12:48 crc kubenswrapper[4720]: W0122 07:12:48.200901 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd16b716e_3d91_4255_979d_95cb059f99ee.slice/crio-83d69397ef62c8f3bf85b9566afb274958b9da0b127c3492d054f4106617e389 WatchSource:0}: Error finding container 83d69397ef62c8f3bf85b9566afb274958b9da0b127c3492d054f4106617e389: Status 404 returned error can't find the container with id 83d69397ef62c8f3bf85b9566afb274958b9da0b127c3492d054f4106617e389
Jan 22 07:12:48 crc kubenswrapper[4720]: I0122 07:12:48.222041 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f58c7c4-42ec-42ec-9c37-44cdb8490082" path="/var/lib/kubelet/pods/5f58c7c4-42ec-42ec-9c37-44cdb8490082/volumes"
Jan 22 07:12:48 crc kubenswrapper[4720]: I0122 07:12:48.361307 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerStarted","Data":"83d69397ef62c8f3bf85b9566afb274958b9da0b127c3492d054f4106617e389"}
Jan 22 07:12:49 crc kubenswrapper[4720]: I0122 07:12:49.375774 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerStarted","Data":"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f"}
Jan 22 07:12:50 crc kubenswrapper[4720]: I0122 07:12:50.387280 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerStarted","Data":"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f"}
Jan 22 07:12:51 crc kubenswrapper[4720]: I0122 07:12:51.397176 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerStarted","Data":"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e"}
Jan 22 07:12:53 crc kubenswrapper[4720]: I0122 07:12:53.419427 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerStarted","Data":"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a"}
Jan 22 07:12:53 crc kubenswrapper[4720]: I0122 07:12:53.420225 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 22 07:12:53 crc kubenswrapper[4720]: I0122 07:12:53.454882 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.488924289 podStartE2EDuration="6.454859949s" podCreationTimestamp="2026-01-22 07:12:47 +0000 UTC" firstStartedPulling="2026-01-22 07:12:48.204544077 +0000 UTC m=+2260.346450772" lastFinishedPulling="2026-01-22 07:12:52.170479727 +0000 UTC m=+2264.312386432" observedRunningTime="2026-01-22 07:12:53.444449175 +0000 UTC m=+2265.586355900" watchObservedRunningTime="2026-01-22 07:12:53.454859949 +0000 UTC m=+2265.596766674"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.152762 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"]
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.154958 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.157305 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.157392 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.175602 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"]
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.255029 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.255094 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rp4n\" (UniqueName: \"kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.255161 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.255226 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.356967 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rp4n\" (UniqueName: \"kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.357069 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.357152 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.357266 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.365552 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.365788 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.374044 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.384668 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rp4n\" (UniqueName: \"kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n\") pod \"watcher-kuttl-db-purge-29484433-nj7lg\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.475222 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:00 crc kubenswrapper[4720]: I0122 07:13:00.947359 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"]
Jan 22 07:13:01 crc kubenswrapper[4720]: I0122 07:13:01.547623 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg" event={"ID":"8ab57a6b-942a-4163-bcf7-64d80452933a","Type":"ContainerStarted","Data":"394d83aeeb565d877c96015494c1a2873d100f96b0bee82fe09279026e99e779"}
Jan 22 07:13:01 crc kubenswrapper[4720]: I0122 07:13:01.548064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg" event={"ID":"8ab57a6b-942a-4163-bcf7-64d80452933a","Type":"ContainerStarted","Data":"6a6d83a158be91ac9fc6053f8b15f5a4d98170add0c18b36e07014933285c054"}
Jan 22 07:13:01 crc kubenswrapper[4720]: I0122 07:13:01.568480 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg" podStartSLOduration=1.5684561019999999 podStartE2EDuration="1.568456102s" podCreationTimestamp="2026-01-22 07:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:13:01.563657596 +0000 UTC m=+2273.705564301" watchObservedRunningTime="2026-01-22 07:13:01.568456102 +0000 UTC m=+2273.710362807"
Jan 22 07:13:04 crc kubenswrapper[4720]: I0122 07:13:04.577377 4720 generic.go:334] "Generic (PLEG): container finished" podID="8ab57a6b-942a-4163-bcf7-64d80452933a" containerID="394d83aeeb565d877c96015494c1a2873d100f96b0bee82fe09279026e99e779" exitCode=0
Jan 22 07:13:04 crc kubenswrapper[4720]: I0122 07:13:04.577513 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg" event={"ID":"8ab57a6b-942a-4163-bcf7-64d80452933a","Type":"ContainerDied","Data":"394d83aeeb565d877c96015494c1a2873d100f96b0bee82fe09279026e99e779"}
Jan 22 07:13:05 crc kubenswrapper[4720]: I0122 07:13:05.921852 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.276117 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rp4n\" (UniqueName: \"kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n\") pod \"8ab57a6b-942a-4163-bcf7-64d80452933a\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") "
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.276386 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle\") pod \"8ab57a6b-942a-4163-bcf7-64d80452933a\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") "
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.276444 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data\") pod \"8ab57a6b-942a-4163-bcf7-64d80452933a\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") "
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.276539 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume\") pod \"8ab57a6b-942a-4163-bcf7-64d80452933a\" (UID: \"8ab57a6b-942a-4163-bcf7-64d80452933a\") "
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.290028 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "8ab57a6b-942a-4163-bcf7-64d80452933a" (UID: "8ab57a6b-942a-4163-bcf7-64d80452933a"). InnerVolumeSpecName "scripts-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.294174 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n" (OuterVolumeSpecName: "kube-api-access-2rp4n") pod "8ab57a6b-942a-4163-bcf7-64d80452933a" (UID: "8ab57a6b-942a-4163-bcf7-64d80452933a"). InnerVolumeSpecName "kube-api-access-2rp4n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.311099 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ab57a6b-942a-4163-bcf7-64d80452933a" (UID: "8ab57a6b-942a-4163-bcf7-64d80452933a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.366639 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data" (OuterVolumeSpecName: "config-data") pod "8ab57a6b-942a-4163-bcf7-64d80452933a" (UID: "8ab57a6b-942a-4163-bcf7-64d80452933a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.378352 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rp4n\" (UniqueName: \"kubernetes.io/projected/8ab57a6b-942a-4163-bcf7-64d80452933a-kube-api-access-2rp4n\") on node \"crc\" DevicePath \"\""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.378391 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.378404 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-config-data\") on node \"crc\" DevicePath \"\""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.378413 4720 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/8ab57a6b-942a-4163-bcf7-64d80452933a-scripts-volume\") on node \"crc\" DevicePath \"\""
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.594242 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg" event={"ID":"8ab57a6b-942a-4163-bcf7-64d80452933a","Type":"ContainerDied","Data":"6a6d83a158be91ac9fc6053f8b15f5a4d98170add0c18b36e07014933285c054"}
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.594533 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6d83a158be91ac9fc6053f8b15f5a4d98170add0c18b36e07014933285c054"
Jan 22 07:13:06 crc kubenswrapper[4720]: I0122 07:13:06.594499 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.876272 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.883841 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-4mqkv"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.905992 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.912296 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29484433-nj7lg"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.923375 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-w8r5f"]
Jan 22 07:13:09 crc kubenswrapper[4720]: E0122 07:13:09.923814 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab57a6b-942a-4163-bcf7-64d80452933a" containerName="watcher-db-manage"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.923836 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab57a6b-942a-4163-bcf7-64d80452933a" containerName="watcher-db-manage"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.924042 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab57a6b-942a-4163-bcf7-64d80452933a" containerName="watcher-db-manage"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.924675 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.952680 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-w8r5f"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.993322 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.993468 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6v6\" (UniqueName: \"kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.995481 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"]
Jan 22 07:13:09 crc kubenswrapper[4720]: I0122 07:13:09.995774 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerName="watcher-applier" containerID="cri-o://12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.095658 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.095785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6v6\" (UniqueName: \"kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.096727 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.126951 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"]
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.127279 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-kuttl-api-log" containerID="cri-o://a100977fe4a0b7b518a80888dd685d14f82f3654474d4af1105cfc244810533f" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.127847 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-api" containerID="cri-o://d3fe731cca03907b7be05ea13bd6dae95c63f23bc76f5de7a4d4035179c56980" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.137663 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"]
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.137998 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-kuttl-api-log" containerID="cri-o://eb42e166395ae91e6bf5f2b28e698a84e27669e5d87aa18d80ea9b9844fcd3cb" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.138191 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-api" containerID="cri-o://9e9781537c0c644ace106ee566300f32621fc025a35d48790e114f2e902e4543" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.144717 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6v6\" (UniqueName: \"kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6\") pod \"watchertest-account-delete-w8r5f\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.156835 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.164375 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" containerName="watcher-decision-engine" containerID="cri-o://0022af083d2cffb0ce09b94a6d7cf971e0d15f2bdcb9207280c78e61165965dc" gracePeriod=30
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.231361 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e812cf-4d66-42df-9fdb-f5b80e6a2766" path="/var/lib/kubelet/pods/42e812cf-4d66-42df-9fdb-f5b80e6a2766/volumes"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.232368 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab57a6b-942a-4163-bcf7-64d80452933a" path="/var/lib/kubelet/pods/8ab57a6b-942a-4163-bcf7-64d80452933a/volumes"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.244711 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f"
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.643310 4720 generic.go:334] "Generic (PLEG): container finished" podID="4e24a170-40dc-44c4-9cd8-be786de38699" containerID="eb42e166395ae91e6bf5f2b28e698a84e27669e5d87aa18d80ea9b9844fcd3cb" exitCode=143
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.643689 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerDied","Data":"eb42e166395ae91e6bf5f2b28e698a84e27669e5d87aa18d80ea9b9844fcd3cb"}
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.649572 4720 generic.go:334] "Generic (PLEG): container finished" podID="1037ddad-a13e-4701-ad46-a24948f9973f" containerID="a100977fe4a0b7b518a80888dd685d14f82f3654474d4af1105cfc244810533f" exitCode=143
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.649619 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerDied","Data":"a100977fe4a0b7b518a80888dd685d14f82f3654474d4af1105cfc244810533f"}
Jan 22 07:13:10 crc kubenswrapper[4720]: I0122 07:13:10.887264 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-w8r5f"]
Jan 22 07:13:10 crc kubenswrapper[4720]: W0122 07:13:10.894933 4720 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2865d561_4ab8_41ac_b8a1_52738f0a7026.slice/crio-3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b WatchSource:0}: Error finding container 3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b: Status 404 returned error can't find the container with id 3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b Jan 22 07:13:11 crc kubenswrapper[4720]: I0122 07:13:11.685994 4720 generic.go:334] "Generic (PLEG): container finished" podID="2865d561-4ab8-41ac-b8a1-52738f0a7026" containerID="d0d3c5d00bf8dcf6e6d84ca5e0ba8f26f0c5ba342888311b1ff972a0f7ce8d58" exitCode=0 Jan 22 07:13:11 crc kubenswrapper[4720]: I0122 07:13:11.686690 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f" event={"ID":"2865d561-4ab8-41ac-b8a1-52738f0a7026","Type":"ContainerDied","Data":"d0d3c5d00bf8dcf6e6d84ca5e0ba8f26f0c5ba342888311b1ff972a0f7ce8d58"} Jan 22 07:13:11 crc kubenswrapper[4720]: I0122 07:13:11.686725 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f" event={"ID":"2865d561-4ab8-41ac-b8a1-52738f0a7026","Type":"ContainerStarted","Data":"3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b"} Jan 22 07:13:11 crc kubenswrapper[4720]: I0122 07:13:11.691840 4720 generic.go:334] "Generic (PLEG): container finished" podID="1037ddad-a13e-4701-ad46-a24948f9973f" containerID="d3fe731cca03907b7be05ea13bd6dae95c63f23bc76f5de7a4d4035179c56980" exitCode=0 Jan 22 07:13:11 crc kubenswrapper[4720]: I0122 07:13:11.691879 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerDied","Data":"d3fe731cca03907b7be05ea13bd6dae95c63f23bc76f5de7a4d4035179c56980"} Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.033284 4720 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146441 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4d2m\" (UniqueName: \"kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146494 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146535 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146589 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146626 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.146829 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca\") pod \"1037ddad-a13e-4701-ad46-a24948f9973f\" (UID: \"1037ddad-a13e-4701-ad46-a24948f9973f\") " Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.147271 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs" (OuterVolumeSpecName: "logs") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.147825 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1037ddad-a13e-4701-ad46-a24948f9973f-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.160276 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m" (OuterVolumeSpecName: "kube-api-access-k4d2m") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "kube-api-access-k4d2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.177107 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.183727 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.201215 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data" (OuterVolumeSpecName: "config-data") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.222258 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "1037ddad-a13e-4701-ad46-a24948f9973f" (UID: "1037ddad-a13e-4701-ad46-a24948f9973f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.249228 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.249258 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.249266 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.249276 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4d2m\" (UniqueName: \"kubernetes.io/projected/1037ddad-a13e-4701-ad46-a24948f9973f-kube-api-access-k4d2m\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.249287 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1037ddad-a13e-4701-ad46-a24948f9973f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.703754 4720 generic.go:334] "Generic (PLEG): container finished" podID="4e24a170-40dc-44c4-9cd8-be786de38699" containerID="9e9781537c0c644ace106ee566300f32621fc025a35d48790e114f2e902e4543" exitCode=0 Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.703837 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerDied","Data":"9e9781537c0c644ace106ee566300f32621fc025a35d48790e114f2e902e4543"} Jan 22 
07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.707474 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.707485 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"1037ddad-a13e-4701-ad46-a24948f9973f","Type":"ContainerDied","Data":"93171a2d7c6d3a01dc080073bbb80cdcce5ca45893e4c143f9aa2fd9e7cb88c6"} Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.707545 4720 scope.go:117] "RemoveContainer" containerID="d3fe731cca03907b7be05ea13bd6dae95c63f23bc76f5de7a4d4035179c56980" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.747829 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.749458 4720 scope.go:117] "RemoveContainer" containerID="a100977fe4a0b7b518a80888dd685d14f82f3654474d4af1105cfc244810533f" Jan 22 07:13:12 crc kubenswrapper[4720]: I0122 07:13:12.754545 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.117214 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.261739 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.266992 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rhk\" (UniqueName: \"kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267085 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267120 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267190 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267234 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267299 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca\") pod \"4e24a170-40dc-44c4-9cd8-be786de38699\" (UID: \"4e24a170-40dc-44c4-9cd8-be786de38699\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267778 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs" (OuterVolumeSpecName: "logs") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.267936 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e24a170-40dc-44c4-9cd8-be786de38699-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.279091 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk" (OuterVolumeSpecName: "kube-api-access-h4rhk") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "kube-api-access-h4rhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.330534 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.332101 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data" (OuterVolumeSpecName: "config-data") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.340504 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.350402 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "4e24a170-40dc-44c4-9cd8-be786de38699" (UID: "4e24a170-40dc-44c4-9cd8-be786de38699"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369036 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts\") pod \"2865d561-4ab8-41ac-b8a1-52738f0a7026\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369112 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds6v6\" (UniqueName: \"kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6\") pod \"2865d561-4ab8-41ac-b8a1-52738f0a7026\" (UID: \"2865d561-4ab8-41ac-b8a1-52738f0a7026\") " Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369464 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2865d561-4ab8-41ac-b8a1-52738f0a7026" (UID: "2865d561-4ab8-41ac-b8a1-52738f0a7026"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369662 4720 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2865d561-4ab8-41ac-b8a1-52738f0a7026-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369682 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369691 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369700 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rhk\" (UniqueName: \"kubernetes.io/projected/4e24a170-40dc-44c4-9cd8-be786de38699-kube-api-access-h4rhk\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369710 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.369718 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e24a170-40dc-44c4-9cd8-be786de38699-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.372207 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6" (OuterVolumeSpecName: "kube-api-access-ds6v6") pod 
"2865d561-4ab8-41ac-b8a1-52738f0a7026" (UID: "2865d561-4ab8-41ac-b8a1-52738f0a7026"). InnerVolumeSpecName "kube-api-access-ds6v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.471821 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds6v6\" (UniqueName: \"kubernetes.io/projected/2865d561-4ab8-41ac-b8a1-52738f0a7026-kube-api-access-ds6v6\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.718944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"4e24a170-40dc-44c4-9cd8-be786de38699","Type":"ContainerDied","Data":"17a47724550455ace710a755a976cf04e1501ccd8c444719687c421f3241889a"} Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.719008 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.719020 4720 scope.go:117] "RemoveContainer" containerID="9e9781537c0c644ace106ee566300f32621fc025a35d48790e114f2e902e4543" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.721299 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.721326 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-w8r5f" event={"ID":"2865d561-4ab8-41ac-b8a1-52738f0a7026","Type":"ContainerDied","Data":"3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b"} Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.721489 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bd133ee2ea78e500673350b35ec03273400e10906e76cc23f042a483657c12b" Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.768044 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.781123 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 22 07:13:13 crc kubenswrapper[4720]: I0122 07:13:13.787140 4720 scope.go:117] "RemoveContainer" containerID="eb42e166395ae91e6bf5f2b28e698a84e27669e5d87aa18d80ea9b9844fcd3cb" Jan 22 07:13:13 crc kubenswrapper[4720]: E0122 07:13:13.833129 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c is running failed: container process not found" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:13:13 crc kubenswrapper[4720]: E0122 07:13:13.833402 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c is running failed: container process not found" 
containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:13:13 crc kubenswrapper[4720]: E0122 07:13:13.833584 4720 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c is running failed: container process not found" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 22 07:13:13 crc kubenswrapper[4720]: E0122 07:13:13.833612 4720 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c is running failed: container process not found" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerName="watcher-applier" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.046120 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.046474 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-central-agent" containerID="cri-o://98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f" gracePeriod=30 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.046532 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="proxy-httpd" containerID="cri-o://246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a" gracePeriod=30 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.046612 
4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="sg-core" containerID="cri-o://da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e" gracePeriod=30 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.046651 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-notification-agent" containerID="cri-o://c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f" gracePeriod=30 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.094534 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.113935 4720 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.182690 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data\") pod \"228f711d-bac2-4ac4-b837-8b86b4111f50\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.182767 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle\") pod \"228f711d-bac2-4ac4-b837-8b86b4111f50\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.182860 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-7gs4n\" (UniqueName: \"kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n\") pod \"228f711d-bac2-4ac4-b837-8b86b4111f50\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.182881 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls\") pod \"228f711d-bac2-4ac4-b837-8b86b4111f50\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.182942 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs\") pod \"228f711d-bac2-4ac4-b837-8b86b4111f50\" (UID: \"228f711d-bac2-4ac4-b837-8b86b4111f50\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.184106 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs" (OuterVolumeSpecName: "logs") pod "228f711d-bac2-4ac4-b837-8b86b4111f50" (UID: "228f711d-bac2-4ac4-b837-8b86b4111f50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.195774 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n" (OuterVolumeSpecName: "kube-api-access-7gs4n") pod "228f711d-bac2-4ac4-b837-8b86b4111f50" (UID: "228f711d-bac2-4ac4-b837-8b86b4111f50"). InnerVolumeSpecName "kube-api-access-7gs4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.218312 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "228f711d-bac2-4ac4-b837-8b86b4111f50" (UID: "228f711d-bac2-4ac4-b837-8b86b4111f50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.227828 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" path="/var/lib/kubelet/pods/1037ddad-a13e-4701-ad46-a24948f9973f/volumes" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.228581 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" path="/var/lib/kubelet/pods/4e24a170-40dc-44c4-9cd8-be786de38699/volumes" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.245932 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data" (OuterVolumeSpecName: "config-data") pod "228f711d-bac2-4ac4-b837-8b86b4111f50" (UID: "228f711d-bac2-4ac4-b837-8b86b4111f50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.260211 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "228f711d-bac2-4ac4-b837-8b86b4111f50" (UID: "228f711d-bac2-4ac4-b837-8b86b4111f50"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.285145 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.285189 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.285204 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gs4n\" (UniqueName: \"kubernetes.io/projected/228f711d-bac2-4ac4-b837-8b86b4111f50-kube-api-access-7gs4n\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.285215 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/228f711d-bac2-4ac4-b837-8b86b4111f50-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.285226 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/228f711d-bac2-4ac4-b837-8b86b4111f50-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.754812 4720 generic.go:334] "Generic (PLEG): container finished" podID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" exitCode=0 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.754924 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.754937 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"228f711d-bac2-4ac4-b837-8b86b4111f50","Type":"ContainerDied","Data":"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.756130 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"228f711d-bac2-4ac4-b837-8b86b4111f50","Type":"ContainerDied","Data":"cb094a7b921638824e8bc9e9726330c13f303ba013840ee1bb45064d321e8674"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.756198 4720 scope.go:117] "RemoveContainer" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.761403 4720 generic.go:334] "Generic (PLEG): container finished" podID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" containerID="0022af083d2cffb0ce09b94a6d7cf971e0d15f2bdcb9207280c78e61165965dc" exitCode=0 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.761484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3253a75f-f2ab-43fe-9b62-cfa02849f7bc","Type":"ContainerDied","Data":"0022af083d2cffb0ce09b94a6d7cf971e0d15f2bdcb9207280c78e61165965dc"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.778944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerDied","Data":"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.778890 4720 generic.go:334] "Generic (PLEG): container finished" podID="d16b716e-3d91-4255-979d-95cb059f99ee" 
containerID="246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a" exitCode=0 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.779019 4720 generic.go:334] "Generic (PLEG): container finished" podID="d16b716e-3d91-4255-979d-95cb059f99ee" containerID="da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e" exitCode=2 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.779030 4720 generic.go:334] "Generic (PLEG): container finished" podID="d16b716e-3d91-4255-979d-95cb059f99ee" containerID="98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f" exitCode=0 Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.779047 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerDied","Data":"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.779060 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerDied","Data":"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f"} Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.812618 4720 scope.go:117] "RemoveContainer" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" Jan 22 07:13:14 crc kubenswrapper[4720]: E0122 07:13:14.813512 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c\": container with ID starting with 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c not found: ID does not exist" containerID="12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.813545 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c"} err="failed to get container status \"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c\": rpc error: code = NotFound desc = could not find container \"12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c\": container with ID starting with 12c01c9e8beee36790b085a69f5615c195d7a27968f71e1a6abbe8d5aa700f2c not found: ID does not exist" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.817733 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.834606 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.841466 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.978923 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wfvc6"] Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.998458 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-wfvc6"] Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999128 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls\") pod \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999218 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle\") pod 
\"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999315 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs\") pod \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999465 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data\") pod \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999522 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh\") pod \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999633 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca\") pod \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\" (UID: \"3253a75f-f2ab-43fe-9b62-cfa02849f7bc\") " Jan 22 07:13:14 crc kubenswrapper[4720]: I0122 07:13:14.999860 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs" (OuterVolumeSpecName: "logs") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.000186 4720 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-logs\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.021131 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh" (OuterVolumeSpecName: "kube-api-access-pmdlh") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "kube-api-access-pmdlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.046407 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-dl67x"] Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.063019 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-dl67x"] Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.068121 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.068157 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.091970 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-w8r5f"] Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.099721 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-w8r5f"] Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.100176 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.101298 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmdlh\" (UniqueName: \"kubernetes.io/projected/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-kube-api-access-pmdlh\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.101317 4720 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.101327 4720 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.101336 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc 
kubenswrapper[4720]: I0122 07:13:15.102056 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data" (OuterVolumeSpecName: "config-data") pod "3253a75f-f2ab-43fe-9b62-cfa02849f7bc" (UID: "3253a75f-f2ab-43fe-9b62-cfa02849f7bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.202901 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3253a75f-f2ab-43fe-9b62-cfa02849f7bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.791371 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"3253a75f-f2ab-43fe-9b62-cfa02849f7bc","Type":"ContainerDied","Data":"21de1eae354dd97e32290a6a930bdcf9f45993a88e36d0dfa9012f7122d16e79"} Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.791448 4720 scope.go:117] "RemoveContainer" containerID="0022af083d2cffb0ce09b94a6d7cf971e0d15f2bdcb9207280c78e61165965dc" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.791522 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.830104 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:13:15 crc kubenswrapper[4720]: I0122 07:13:15.836374 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.222076 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" path="/var/lib/kubelet/pods/228f711d-bac2-4ac4-b837-8b86b4111f50/volumes" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.222713 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2865d561-4ab8-41ac-b8a1-52738f0a7026" path="/var/lib/kubelet/pods/2865d561-4ab8-41ac-b8a1-52738f0a7026/volumes" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.223395 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" path="/var/lib/kubelet/pods/3253a75f-f2ab-43fe-9b62-cfa02849f7bc/volumes" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.224638 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59" path="/var/lib/kubelet/pods/7ae0c5a8-fdb3-4b7e-96ea-b8445d063a59/volumes" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.225348 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89b9d35f-d279-4f5c-8316-9ee5cb5e8b68" path="/var/lib/kubelet/pods/89b9d35f-d279-4f5c-8316-9ee5cb5e8b68/volumes" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.794074 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.806444 4720 generic.go:334] "Generic (PLEG): container finished" podID="d16b716e-3d91-4255-979d-95cb059f99ee" containerID="c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f" exitCode=0 Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.806531 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.806525 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerDied","Data":"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f"} Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.806603 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d16b716e-3d91-4255-979d-95cb059f99ee","Type":"ContainerDied","Data":"83d69397ef62c8f3bf85b9566afb274958b9da0b127c3492d054f4106617e389"} Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.806635 4720 scope.go:117] "RemoveContainer" containerID="246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.848155 4720 scope.go:117] "RemoveContainer" containerID="da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.875988 4720 scope.go:117] "RemoveContainer" containerID="c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.895311 4720 scope.go:117] "RemoveContainer" containerID="98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.912393 4720 scope.go:117] "RemoveContainer" 
containerID="246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a" Jan 22 07:13:16 crc kubenswrapper[4720]: E0122 07:13:16.912961 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a\": container with ID starting with 246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a not found: ID does not exist" containerID="246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913008 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a"} err="failed to get container status \"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a\": rpc error: code = NotFound desc = could not find container \"246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a\": container with ID starting with 246de8db76d8e070d68ca5393d1b46e6e40d0013e31ffd4e84706e3b6c21711a not found: ID does not exist" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913040 4720 scope.go:117] "RemoveContainer" containerID="da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e" Jan 22 07:13:16 crc kubenswrapper[4720]: E0122 07:13:16.913494 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e\": container with ID starting with da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e not found: ID does not exist" containerID="da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913523 4720 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e"} err="failed to get container status \"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e\": rpc error: code = NotFound desc = could not find container \"da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e\": container with ID starting with da38ba817807e7646e1baf73abf10969ceeb8bfd7c19579ddee86dc24f3aea6e not found: ID does not exist" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913540 4720 scope.go:117] "RemoveContainer" containerID="c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f" Jan 22 07:13:16 crc kubenswrapper[4720]: E0122 07:13:16.913849 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f\": container with ID starting with c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f not found: ID does not exist" containerID="c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913894 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f"} err="failed to get container status \"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f\": rpc error: code = NotFound desc = could not find container \"c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f\": container with ID starting with c2c51d2cb77356a2ca7855d5dae450416740ec1a17a81f505b6f8a07f147f78f not found: ID does not exist" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.913973 4720 scope.go:117] "RemoveContainer" containerID="98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f" Jan 22 07:13:16 crc kubenswrapper[4720]: E0122 07:13:16.914315 4720 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f\": container with ID starting with 98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f not found: ID does not exist" containerID="98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.914350 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f"} err="failed to get container status \"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f\": rpc error: code = NotFound desc = could not find container \"98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f\": container with ID starting with 98584c31cd40a904f501562c57462948e6f734d25b6232aa927e623e6120147f not found: ID does not exist" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931089 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931125 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931230 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: 
\"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931255 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931316 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931360 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931381 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-875kf\" (UniqueName: \"kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931480 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data\") pod \"d16b716e-3d91-4255-979d-95cb059f99ee\" (UID: \"d16b716e-3d91-4255-979d-95cb059f99ee\") " Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931659 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.931795 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.932373 4720 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.932398 4720 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d16b716e-3d91-4255-979d-95cb059f99ee-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.936754 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts" (OuterVolumeSpecName: "scripts") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.936899 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf" (OuterVolumeSpecName: "kube-api-access-875kf") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "kube-api-access-875kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.958487 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:16 crc kubenswrapper[4720]: I0122 07:13:16.983475 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.003498 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.026365 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data" (OuterVolumeSpecName: "config-data") pod "d16b716e-3d91-4255-979d-95cb059f99ee" (UID: "d16b716e-3d91-4255-979d-95cb059f99ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038310 4720 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038366 4720 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038377 4720 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038387 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-875kf\" (UniqueName: \"kubernetes.io/projected/d16b716e-3d91-4255-979d-95cb059f99ee-kube-api-access-875kf\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038436 4720 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.038446 4720 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d16b716e-3d91-4255-979d-95cb059f99ee-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.145609 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.156709 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.192781 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193375 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" containerName="watcher-decision-engine" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193405 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" containerName="watcher-decision-engine" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193426 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193437 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193450 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193459 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193469 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" 
containerName="ceilometer-central-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193477 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-central-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193498 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193507 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193524 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="sg-core" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193533 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="sg-core" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193545 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="proxy-httpd" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193555 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="proxy-httpd" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193572 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-notification-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193580 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-notification-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193595 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" 
containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193603 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193618 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2865d561-4ab8-41ac-b8a1-52738f0a7026" containerName="mariadb-account-delete" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193627 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="2865d561-4ab8-41ac-b8a1-52738f0a7026" containerName="mariadb-account-delete" Jan 22 07:13:17 crc kubenswrapper[4720]: E0122 07:13:17.193646 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerName="watcher-applier" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193655 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerName="watcher-applier" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193881 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="3253a75f-f2ab-43fe-9b62-cfa02849f7bc" containerName="watcher-decision-engine" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193893 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193927 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="2865d561-4ab8-41ac-b8a1-52738f0a7026" containerName="mariadb-account-delete" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193942 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-api" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193958 4720 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-central-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193968 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="proxy-httpd" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193980 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="228f711d-bac2-4ac4-b837-8b86b4111f50" containerName="watcher-applier" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.193992 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="ceilometer-notification-agent" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.194006 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" containerName="sg-core" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.194022 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1037ddad-a13e-4701-ad46-a24948f9973f" containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.194036 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e24a170-40dc-44c4-9cd8-be786de38699" containerName="watcher-kuttl-api-log" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.196435 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.202292 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.202939 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.203278 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.203490 4720 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.343822 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-scripts\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.343878 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.343937 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fksg\" (UniqueName: \"kubernetes.io/projected/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-kube-api-access-5fksg\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.344114 4720 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-config-data\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.344170 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.344293 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-log-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.344348 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.344378 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-run-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446404 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-config-data\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446471 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446520 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-log-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446556 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446587 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-run-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446701 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446727 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-scripts\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.446753 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fksg\" (UniqueName: \"kubernetes.io/projected/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-kube-api-access-5fksg\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.447071 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-log-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.447195 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-run-httpd\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.451489 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.451641 4720 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.456335 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-config-data\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.457719 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-scripts\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.464042 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.465271 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fksg\" (UniqueName: \"kubernetes.io/projected/ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d-kube-api-access-5fksg\") pod \"ceilometer-0\" (UID: \"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.525022 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:17 crc kubenswrapper[4720]: I0122 07:13:17.956822 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 22 07:13:18 crc kubenswrapper[4720]: I0122 07:13:18.246204 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16b716e-3d91-4255-979d-95cb059f99ee" path="/var/lib/kubelet/pods/d16b716e-3d91-4255-979d-95cb059f99ee/volumes" Jan 22 07:13:18 crc kubenswrapper[4720]: I0122 07:13:18.828976 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d","Type":"ContainerStarted","Data":"51d0e6e3726caef2e60994698e9654f480fa1e886e69efaf332abb14fedfa4cb"} Jan 22 07:13:19 crc kubenswrapper[4720]: I0122 07:13:19.839232 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d","Type":"ContainerStarted","Data":"efeb82a4015665bd64491fc1b03b66091fc70ad8829c0f909f1e2946d4872427"} Jan 22 07:13:19 crc kubenswrapper[4720]: I0122 07:13:19.839485 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d","Type":"ContainerStarted","Data":"fe1397f84fb3361b78d3ab8649fad422bf1ca18b88ec18c4ba2e086e8777e567"} Jan 22 07:13:20 crc kubenswrapper[4720]: I0122 07:13:20.849797 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d","Type":"ContainerStarted","Data":"4c6786c4e6912f54e73980465796aa2b7ddb1ec137782521fe78dda9647d1c11"} Jan 22 07:13:21 crc kubenswrapper[4720]: I0122 07:13:21.861640 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d","Type":"ContainerStarted","Data":"32a0fec9382691d10588cf807267171b9e2aa1a7239635b335b8f80c7de0baaa"} Jan 22 07:13:21 crc kubenswrapper[4720]: I0122 07:13:21.862291 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.331753 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=18.041000962 podStartE2EDuration="21.331717484s" podCreationTimestamp="2026-01-22 07:13:17 +0000 UTC" firstStartedPulling="2026-01-22 07:13:17.960044248 +0000 UTC m=+2290.101950953" lastFinishedPulling="2026-01-22 07:13:21.25076077 +0000 UTC m=+2293.392667475" observedRunningTime="2026-01-22 07:13:21.887939174 +0000 UTC m=+2294.029845889" watchObservedRunningTime="2026-01-22 07:13:38.331717484 +0000 UTC m=+2310.473624189" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.339723 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-gfn8w/must-gather-8cfqv"] Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.348626 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.355359 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-gfn8w"/"default-dockercfg-lzkgv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.355829 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gfn8w"/"kube-root-ca.crt" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.362111 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-gfn8w"/"openshift-service-ca.crt" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.367852 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfn8w/must-gather-8cfqv"] Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.484192 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-must-gather-output\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.484341 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9srx\" (UniqueName: \"kubernetes.io/projected/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-kube-api-access-f9srx\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.586327 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9srx\" (UniqueName: \"kubernetes.io/projected/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-kube-api-access-f9srx\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " 
pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.586439 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-must-gather-output\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.586870 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-must-gather-output\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.606423 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9srx\" (UniqueName: \"kubernetes.io/projected/0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5-kube-api-access-f9srx\") pod \"must-gather-8cfqv\" (UID: \"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5\") " pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:38 crc kubenswrapper[4720]: I0122 07:13:38.673210 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" Jan 22 07:13:39 crc kubenswrapper[4720]: I0122 07:13:39.202572 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-gfn8w/must-gather-8cfqv"] Jan 22 07:13:39 crc kubenswrapper[4720]: W0122 07:13:39.205207 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e6e6106_9cc5_4a36_9ce4_d4fedcf93bf5.slice/crio-8d88002155bb64a4191888e4451d4e230703e98555f3b58947c4149a81489c81 WatchSource:0}: Error finding container 8d88002155bb64a4191888e4451d4e230703e98555f3b58947c4149a81489c81: Status 404 returned error can't find the container with id 8d88002155bb64a4191888e4451d4e230703e98555f3b58947c4149a81489c81 Jan 22 07:13:40 crc kubenswrapper[4720]: I0122 07:13:40.094365 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" event={"ID":"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5","Type":"ContainerStarted","Data":"8d88002155bb64a4191888e4451d4e230703e98555f3b58947c4149a81489c81"} Jan 22 07:13:46 crc kubenswrapper[4720]: I0122 07:13:46.159110 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" event={"ID":"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5","Type":"ContainerStarted","Data":"cffc3228e0dae0494b5ac87eefe8d00f895825dbdfd7bf41a6fb0b928cc4395c"} Jan 22 07:13:47 crc kubenswrapper[4720]: I0122 07:13:47.168986 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" event={"ID":"0e6e6106-9cc5-4a36-9ce4-d4fedcf93bf5","Type":"ContainerStarted","Data":"3bda583da3566ee4cb9f43510afb5015a28ac2ad8a956794c692b8868d2fb6cc"} Jan 22 07:13:47 crc kubenswrapper[4720]: I0122 07:13:47.188050 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-gfn8w/must-gather-8cfqv" podStartSLOduration=2.486516928 
podStartE2EDuration="9.188032392s" podCreationTimestamp="2026-01-22 07:13:38 +0000 UTC" firstStartedPulling="2026-01-22 07:13:39.208242838 +0000 UTC m=+2311.350149543" lastFinishedPulling="2026-01-22 07:13:45.909758302 +0000 UTC m=+2318.051665007" observedRunningTime="2026-01-22 07:13:47.183466333 +0000 UTC m=+2319.325373048" watchObservedRunningTime="2026-01-22 07:13:47.188032392 +0000 UTC m=+2319.329939097" Jan 22 07:13:47 crc kubenswrapper[4720]: I0122 07:13:47.549437 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 22 07:14:14 crc kubenswrapper[4720]: I0122 07:14:14.377837 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/controller/0.log" Jan 22 07:14:14 crc kubenswrapper[4720]: I0122 07:14:14.383678 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/kube-rbac-proxy/0.log" Jan 22 07:14:14 crc kubenswrapper[4720]: I0122 07:14:14.405752 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/controller/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.694349 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.702491 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/reloader/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.713161 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr-metrics/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.720202 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.731751 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy-frr/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.739988 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-frr-files/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.746719 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-reloader/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.754243 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-metrics/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.765680 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bnntl_15c14672-daa2-408e-a693-6ac7bef81828/frr-k8s-webhook-server/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.790589 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7449444d4b-xh4ps_0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c/manager/0.log" Jan 22 07:14:15 crc kubenswrapper[4720]: I0122 07:14:15.800420 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc49cf759-5hjst_48a13b3e-ee8e-4ba2-ad41-c83176d673a5/webhook-server/0.log" Jan 22 07:14:16 crc kubenswrapper[4720]: I0122 07:14:16.055335 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/speaker/0.log" Jan 22 07:14:16 crc kubenswrapper[4720]: I0122 07:14:16.060684 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/kube-rbac-proxy/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.016406 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/extract/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.025663 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/util/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.039878 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/pull/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.054146 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-kp5p9_a072cd1a-6b0c-4f3c-aa50-12a441bc87e3/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.087990 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-g9d9q_15bf2b23-40fc-4958-9774-3c6e4f2c591a/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.099990 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4nvtq_cc13fc87-a160-4804-aef4-bb2c6ee89f13/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.112568 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/extract/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 
07:14:19.126224 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/util/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.133843 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/pull/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.146528 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-l7wpl_b464ce62-6f79-452c-a1c6-3c4878bcc8ba/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.160448 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-6rl8m_f30c0975-10b7-4d3b-98f7-63a02ae44927/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.185291 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-9jw99_7d67431b-e376-4558-83f2-af33c36b403b/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.405551 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-h6fd5_25a73ab8-0306-4e57-9417-ce651e370925/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.418325 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-d5h9r_ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.555151 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ddkv8_21ee70f0-2938-4d3a-9edf-beaa943261ab/manager/0.log" Jan 22 07:14:19 crc 
kubenswrapper[4720]: I0122 07:14:19.569751 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nn4jg_d681304a-06cd-4870-b2b5-4f10936b7775/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.610816 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-hq64w_de14bbbe-09fc-4f3c-8857-e3f7abca82f8/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.620148 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-47njc_fd7a6c01-1255-4f11-9dba-d3119753d47c/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.637105 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-tnvdl_a2440b28-2217-482c-87c6-443616b586cb/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.654898 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-gkhjf_e77f3a0e-4936-4b98-829b-6ea9ebe6e817/manager/0.log" Jan 22 07:14:19 crc kubenswrapper[4720]: I0122 07:14:19.681557 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85485jc7_476ecc66-be12-4a68-8de1-3a062ec12f55/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.163990 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-758ddb75c6-rjkvm_611fcdc7-1f1f-4530-9f34-68dae9bf4bd5/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.173736 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2g762_80ba9f63-ae49-476c-9282-f9b32f804ab3/registry-server/0.log" Jan 22 
07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.190763 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-m2hkw_6a45a130-7295-401c-a63c-1df68c263764/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.206865 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-wmhbp_0a6de6f6-4bef-4f84-b4b8-4de46e9347b1/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.233253 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4nvz6_ff37e0b2-69d6-4217-b44f-a8bf016e45d6/operator/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.249501 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-xqx67_e9c3503d-2a2a-4f59-8c25-b28a681cdcfb/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.422587 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4tlfl_0e186e5c-83e6-465d-9353-e9314702d85a/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.474160 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2cs6n_1a3c6a91-064b-4006-b40f-ba7bc317aa83/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.798983 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-db559d697-hjx74_12086a20-e137-4c50-8273-3823f70fbfda/manager/0.log" Jan 22 07:14:20 crc kubenswrapper[4720]: I0122 07:14:20.809464 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-dsskv_cb206016-4343-44c8-88e0-2f6400068e6d/registry-server/0.log" Jan 22 
07:14:26 crc kubenswrapper[4720]: I0122 07:14:26.896479 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zmhj8_b768bae9-692e-4039-8fea-d88359e16ee4/control-plane-machine-set-operator/0.log" Jan 22 07:14:26 crc kubenswrapper[4720]: I0122 07:14:26.910358 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxdwr_42322892-7874-4c59-ab1a-e3f205212e2e/kube-rbac-proxy/0.log" Jan 22 07:14:26 crc kubenswrapper[4720]: I0122 07:14:26.916411 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxdwr_42322892-7874-4c59-ab1a-e3f205212e2e/machine-api-operator/0.log" Jan 22 07:14:29 crc kubenswrapper[4720]: I0122 07:14:29.780370 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:14:29 crc kubenswrapper[4720]: I0122 07:14:29.780709 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:14:33 crc kubenswrapper[4720]: I0122 07:14:33.321517 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-b9fc8_34089ae4-0f59-4909-96f9-b64ebe3e1a29/cert-manager-controller/0.log" Jan 22 07:14:33 crc kubenswrapper[4720]: I0122 07:14:33.337250 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fkxm7_adf9f211-0196-4391-ae7a-c98e6e20147e/cert-manager-cainjector/0.log" Jan 22 07:14:33 crc kubenswrapper[4720]: I0122 07:14:33.348551 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-5klsr_34799a28-6c13-4288-946f-bc4d9e57b756/cert-manager-webhook/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.257347 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-b4rzf_5515f37e-3d61-49f0-ba5d-5d6896527923/nmstate-console-plugin/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.276392 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lfx9d_10356d8e-1761-4a55-ad79-fee34dd3aabf/nmstate-handler/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.301499 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hnrr7_bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb/nmstate-metrics/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.310506 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hnrr7_bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb/kube-rbac-proxy/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.350867 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hb6mk_2e442158-14c1-4ed3-a62b-679e64c48148/nmstate-operator/0.log" Jan 22 07:14:39 crc kubenswrapper[4720]: I0122 07:14:39.360653 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-68ctq_1d307b97-f8d7-4624-ad82-c40af972eeff/nmstate-webhook/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.259071 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5x7g8_fd9304c1-f30e-4235-9324-b437e69544ee/prometheus-operator/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.272001 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_dad79855-f5f9-42e6-ba0b-c2134f92c107/prometheus-operator-admission-webhook/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.291537 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4/prometheus-operator-admission-webhook/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.330271 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9tl9d_758ea564-cd8b-4e93-bd76-563d86418578/operator/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.340465 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gqdw7_976fdae9-9e7d-46d1-b649-c0cfecd372ae/observability-ui-dashboards/0.log" Jan 22 07:14:45 crc kubenswrapper[4720]: I0122 07:14:45.355560 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-88ll2_db323c34-5995-4cc9-baab-de570b5fc5b3/perses-operator/0.log" Jan 22 07:14:51 crc kubenswrapper[4720]: I0122 07:14:51.253536 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/controller/0.log" Jan 22 07:14:51 crc kubenswrapper[4720]: I0122 07:14:51.260616 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/kube-rbac-proxy/0.log" Jan 22 07:14:51 crc kubenswrapper[4720]: I0122 07:14:51.300351 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/controller/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.461323 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.476701 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/reloader/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.481825 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr-metrics/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.490311 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.499414 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy-frr/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.506305 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-frr-files/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.513292 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-reloader/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.521354 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-metrics/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.533641 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bnntl_15c14672-daa2-408e-a693-6ac7bef81828/frr-k8s-webhook-server/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.558408 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7449444d4b-xh4ps_0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c/manager/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.568463 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc49cf759-5hjst_48a13b3e-ee8e-4ba2-ad41-c83176d673a5/webhook-server/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.770881 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/speaker/0.log" Jan 22 07:14:52 crc kubenswrapper[4720]: I0122 07:14:52.777969 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/kube-rbac-proxy/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.578847 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c/alertmanager/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.588857 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c/config-reloader/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.596090 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_98e9f2ee-4cf2-414f-85c4-3dc1e3023a7c/init-config-reloader/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.645604 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_ceilometer-0_ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d/ceilometer-central-agent/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.663094 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d/ceilometer-notification-agent/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.668365 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d/sg-core/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.674709 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_ec1e1e1e-2c55-4757-b2e2-8ce22381ff5d/proxy-httpd/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.765038 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-b68754746-52s4w_415f8b45-c7ea-49bc-aed1-1367c47fac0b/keystone-api/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.773892 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-cron-29484421-kq8hq_305d07dc-f843-4277-b728-1f00028fbac5/keystone-cron/0.log" Jan 22 07:14:57 crc kubenswrapper[4720]: I0122 07:14:57.783548 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_ada8cab6-f7e3-47fc-8ce8-684f61ceb5b8/kube-state-metrics/0.log" Jan 22 07:14:59 crc kubenswrapper[4720]: I0122 07:14:59.779722 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:14:59 crc kubenswrapper[4720]: I0122 07:14:59.780097 4720 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.150680 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch"] Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.152089 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.154782 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.155000 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.162233 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch"] Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.254012 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.254069 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfs77\" (UniqueName: 
\"kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.254128 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.355335 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.355395 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfs77\" (UniqueName: \"kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.355456 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc 
kubenswrapper[4720]: I0122 07:15:00.356217 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.361042 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.374495 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfs77\" (UniqueName: \"kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77\") pod \"collect-profiles-29484435-7f9ch\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:00 crc kubenswrapper[4720]: I0122 07:15:00.497549 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:01 crc kubenswrapper[4720]: I0122 07:15:01.061315 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch"] Jan 22 07:15:01 crc kubenswrapper[4720]: I0122 07:15:01.153792 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" event={"ID":"683b9d33-e312-4f08-b4f2-a20d2f49a303","Type":"ContainerStarted","Data":"f0a20de1b5e0ed81bb5a0bc3f5b9de98117828095f320ce43813a3d5893b2bb8"} Jan 22 07:15:02 crc kubenswrapper[4720]: I0122 07:15:02.168230 4720 generic.go:334] "Generic (PLEG): container finished" podID="683b9d33-e312-4f08-b4f2-a20d2f49a303" containerID="26087f166e2555ebf351c671e81551886e0aaa57c68957e9534e0a516fd9d5e8" exitCode=0 Jan 22 07:15:02 crc kubenswrapper[4720]: I0122 07:15:02.168770 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" event={"ID":"683b9d33-e312-4f08-b4f2-a20d2f49a303","Type":"ContainerDied","Data":"26087f166e2555ebf351c671e81551886e0aaa57c68957e9534e0a516fd9d5e8"} Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.477782 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.608696 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume\") pod \"683b9d33-e312-4f08-b4f2-a20d2f49a303\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.609144 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume\") pod \"683b9d33-e312-4f08-b4f2-a20d2f49a303\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.609338 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfs77\" (UniqueName: \"kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77\") pod \"683b9d33-e312-4f08-b4f2-a20d2f49a303\" (UID: \"683b9d33-e312-4f08-b4f2-a20d2f49a303\") " Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.609553 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume" (OuterVolumeSpecName: "config-volume") pod "683b9d33-e312-4f08-b4f2-a20d2f49a303" (UID: "683b9d33-e312-4f08-b4f2-a20d2f49a303"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.609790 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/683b9d33-e312-4f08-b4f2-a20d2f49a303-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.615233 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "683b9d33-e312-4f08-b4f2-a20d2f49a303" (UID: "683b9d33-e312-4f08-b4f2-a20d2f49a303"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.616173 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77" (OuterVolumeSpecName: "kube-api-access-lfs77") pod "683b9d33-e312-4f08-b4f2-a20d2f49a303" (UID: "683b9d33-e312-4f08-b4f2-a20d2f49a303"). InnerVolumeSpecName "kube-api-access-lfs77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.712051 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/683b9d33-e312-4f08-b4f2-a20d2f49a303-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:15:03 crc kubenswrapper[4720]: I0122 07:15:03.712235 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfs77\" (UniqueName: \"kubernetes.io/projected/683b9d33-e312-4f08-b4f2-a20d2f49a303-kube-api-access-lfs77\") on node \"crc\" DevicePath \"\"" Jan 22 07:15:04 crc kubenswrapper[4720]: I0122 07:15:04.187416 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" event={"ID":"683b9d33-e312-4f08-b4f2-a20d2f49a303","Type":"ContainerDied","Data":"f0a20de1b5e0ed81bb5a0bc3f5b9de98117828095f320ce43813a3d5893b2bb8"} Jan 22 07:15:04 crc kubenswrapper[4720]: I0122 07:15:04.187466 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0a20de1b5e0ed81bb5a0bc3f5b9de98117828095f320ce43813a3d5893b2bb8" Jan 22 07:15:04 crc kubenswrapper[4720]: I0122 07:15:04.187475 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484435-7f9ch" Jan 22 07:15:04 crc kubenswrapper[4720]: I0122 07:15:04.563495 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"] Jan 22 07:15:04 crc kubenswrapper[4720]: I0122 07:15:04.572165 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484390-gc7lm"] Jan 22 07:15:06 crc kubenswrapper[4720]: I0122 07:15:06.221929 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65daf94-2073-4b05-8b99-f80d7f777d12" path="/var/lib/kubelet/pods/e65daf94-2073-4b05-8b99-f80d7f777d12/volumes" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.041067 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_db50c3b8-8300-4689-be75-dbcc3b10a27f/memcached/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.074289 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_7bcd3174-ca47-4882-a14d-1b631d973fcc/galera/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.087046 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_7bcd3174-ca47-4882-a14d-1b631d973fcc/mysql-bootstrap/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.093229 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_29110f04-f286-4428-b872-a3ed6b6c0919/openstackclient/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.135741 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_dcb7da9c-0e97-404e-9b99-87c192455159/prometheus/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.141885 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_dcb7da9c-0e97-404e-9b99-87c192455159/config-reloader/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.160184 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_dcb7da9c-0e97-404e-9b99-87c192455159/thanos-sidecar/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.170050 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_dcb7da9c-0e97-404e-9b99-87c192455159/init-config-reloader/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.196044 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_33c789df-54ca-47c4-9688-74e392e3b121/rabbitmq/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.201521 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_33c789df-54ca-47c4-9688-74e392e3b121/setup-container/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.260604 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_9482dbed-80f4-4d45-9402-5315c0d59310/rabbitmq/0.log" Jan 22 07:15:08 crc kubenswrapper[4720]: I0122 07:15:08.267136 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_9482dbed-80f4-4d45-9402-5315c0d59310/setup-container/0.log" Jan 22 07:15:16 crc kubenswrapper[4720]: I0122 07:15:16.962790 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw_fc1373bb-3c54-4e19-9129-6d8b288bdc1a/extract/0.log" Jan 22 07:15:16 crc kubenswrapper[4720]: I0122 07:15:16.973314 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw_fc1373bb-3c54-4e19-9129-6d8b288bdc1a/util/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.007764 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931apsqcw_fc1373bb-3c54-4e19-9129-6d8b288bdc1a/pull/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.017476 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr_da345b49-94f9-4cab-ba07-78dd68bd874b/extract/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.025583 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr_da345b49-94f9-4cab-ba07-78dd68bd874b/util/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.034222 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc2pcgr_da345b49-94f9-4cab-ba07-78dd68bd874b/pull/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.047209 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq_86c83893-dd50-4631-aae4-b1069bac73c6/extract/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.055745 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq_86c83893-dd50-4631-aae4-b1069bac73c6/util/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.066970 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139sftq_86c83893-dd50-4631-aae4-b1069bac73c6/pull/0.log" Jan 
22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.078475 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt_dc107a3a-440f-43c6-a92c-378d6fb30761/extract/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.089116 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt_dc107a3a-440f-43c6-a92c-378d6fb30761/util/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.097192 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082sjlt_dc107a3a-440f-43c6-a92c-378d6fb30761/pull/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.558436 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tbv85_2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b/registry-server/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.563567 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tbv85_2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b/extract-utilities/0.log" Jan 22 07:15:17 crc kubenswrapper[4720]: I0122 07:15:17.570817 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tbv85_2f3da976-0b3b-4d11-82fa-a5ea4ebcb38b/extract-content/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.201428 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2gqg2_90763cf9-c272-4870-8f6d-9e3b506a712f/registry-server/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.207270 4720 scope.go:117] "RemoveContainer" containerID="22439d6ab6366b7b10e629bb151a0d740839990657e42c6e5e7d0508c60a1d7d" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.265527 4720 scope.go:117] 
"RemoveContainer" containerID="3cea65862f84662a715490058ad9282de891f32f7264ad46aa88d9dab42dbfe5" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.332048 4720 scope.go:117] "RemoveContainer" containerID="a2e87ee124f6e4831029ea5e0fb764572aa00f7812355fe16822db3f2ea7182b" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.390315 4720 scope.go:117] "RemoveContainer" containerID="5d0d3b8683c59fdf0268b786ffa5f5b29dc63bb46e658131ee301fdf9ea9ad73" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.409614 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2gqg2_90763cf9-c272-4870-8f6d-9e3b506a712f/extract-utilities/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.416741 4720 scope.go:117] "RemoveContainer" containerID="ac3ea03e1ffb648d93b05d8b87ff7e5efcaa75069fdd8b3b46729fd5b7d08889" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.433938 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-2gqg2_90763cf9-c272-4870-8f6d-9e3b506a712f/extract-content/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.451833 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bg62x_1311d24a-e35a-489c-8010-0bca3da90f0f/marketplace-operator/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.575397 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rvhm_58ab1210-e65f-4e2b-a3f9-dacecd42d90d/registry-server/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.580004 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rvhm_58ab1210-e65f-4e2b-a3f9-dacecd42d90d/extract-utilities/0.log" Jan 22 07:15:18 crc kubenswrapper[4720]: I0122 07:15:18.587266 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-6rvhm_58ab1210-e65f-4e2b-a3f9-dacecd42d90d/extract-content/0.log" Jan 22 07:15:19 crc kubenswrapper[4720]: I0122 07:15:19.034163 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qwws2_737d462f-6525-4b14-b25d-bc2687d9c5e8/registry-server/0.log" Jan 22 07:15:19 crc kubenswrapper[4720]: I0122 07:15:19.039369 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qwws2_737d462f-6525-4b14-b25d-bc2687d9c5e8/extract-utilities/0.log" Jan 22 07:15:19 crc kubenswrapper[4720]: I0122 07:15:19.046642 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qwws2_737d462f-6525-4b14-b25d-bc2687d9c5e8/extract-content/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.255967 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5x7g8_fd9304c1-f30e-4235-9324-b437e69544ee/prometheus-operator/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.267813 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_dad79855-f5f9-42e6-ba0b-c2134f92c107/prometheus-operator-admission-webhook/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.280220 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4/prometheus-operator-admission-webhook/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.312173 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9tl9d_758ea564-cd8b-4e93-bd76-563d86418578/operator/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.318945 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gqdw7_976fdae9-9e7d-46d1-b649-c0cfecd372ae/observability-ui-dashboards/0.log" Jan 22 07:15:23 crc kubenswrapper[4720]: I0122 07:15:23.332322 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-88ll2_db323c34-5995-4cc9-baab-de570b5fc5b3/perses-operator/0.log" Jan 22 07:15:29 crc kubenswrapper[4720]: I0122 07:15:29.780940 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:15:29 crc kubenswrapper[4720]: I0122 07:15:29.781562 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:15:29 crc kubenswrapper[4720]: I0122 07:15:29.781622 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" Jan 22 07:15:29 crc kubenswrapper[4720]: I0122 07:15:29.782425 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 07:15:29 crc kubenswrapper[4720]: I0122 07:15:29.782491 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" 
podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" gracePeriod=600 Jan 22 07:15:29 crc kubenswrapper[4720]: E0122 07:15:29.901890 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:15:30 crc kubenswrapper[4720]: I0122 07:15:30.431044 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" exitCode=0 Jan 22 07:15:30 crc kubenswrapper[4720]: I0122 07:15:30.431093 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c"} Jan 22 07:15:30 crc kubenswrapper[4720]: I0122 07:15:30.431134 4720 scope.go:117] "RemoveContainer" containerID="2e4c7f6c5c98df3a612e9e9bbe7b31422556264b5ee2718f6d180f5bbbf48836" Jan 22 07:15:30 crc kubenswrapper[4720]: I0122 07:15:30.431869 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:15:30 crc kubenswrapper[4720]: E0122 07:15:30.432135 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:15:43 crc kubenswrapper[4720]: I0122 07:15:43.211080 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:15:43 crc kubenswrapper[4720]: E0122 07:15:43.211812 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:15:56 crc kubenswrapper[4720]: I0122 07:15:56.211489 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:15:56 crc kubenswrapper[4720]: E0122 07:15:56.212080 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:16:08 crc kubenswrapper[4720]: I0122 07:16:08.214884 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:16:08 crc kubenswrapper[4720]: E0122 07:16:08.215657 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.531048 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5x7g8_fd9304c1-f30e-4235-9324-b437e69544ee/prometheus-operator/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.542602 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-rvwjb_dad79855-f5f9-42e6-ba0b-c2134f92c107/prometheus-operator-admission-webhook/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.562356 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57d798ddfd-xtbwd_b47c94b1-cb06-4aa2-aa94-cbf6da840eb4/prometheus-operator-admission-webhook/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.598939 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-9tl9d_758ea564-cd8b-4e93-bd76-563d86418578/operator/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.606935 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-gqdw7_976fdae9-9e7d-46d1-b649-c0cfecd372ae/observability-ui-dashboards/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.626143 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-88ll2_db323c34-5995-4cc9-baab-de570b5fc5b3/perses-operator/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.830361 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-b9fc8_34089ae4-0f59-4909-96f9-b64ebe3e1a29/cert-manager-controller/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.946021 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fkxm7_adf9f211-0196-4391-ae7a-c98e6e20147e/cert-manager-cainjector/0.log" Jan 22 07:16:16 crc kubenswrapper[4720]: I0122 07:16:16.974924 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-5klsr_34799a28-6c13-4288-946f-bc4d9e57b756/cert-manager-webhook/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.426077 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/controller/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.448670 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/extract/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.458608 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/util/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.458855 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-49vhq_dfe2424d-a522-48e7-921c-ddce7a244b13/kube-rbac-proxy/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.467199 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/pull/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.502630 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/controller/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.502836 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-kp5p9_a072cd1a-6b0c-4f3c-aa50-12a441bc87e3/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.551202 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-g9d9q_15bf2b23-40fc-4958-9774-3c6e4f2c591a/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.559036 4720 scope.go:117] "RemoveContainer" containerID="7869ba0810291f5b33a7836d7f41e37e907cada1c5944038c89e9e4ad91c493a" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.569616 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4nvtq_cc13fc87-a160-4804-aef4-bb2c6ee89f13/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.595219 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/extract/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.597457 4720 scope.go:117] "RemoveContainer" containerID="07fd63215362add398cc17fe1a62a2747a73c18494f3a671e91152137886d9b4" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.602343 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/util/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.621504 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/pull/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.645514 4720 scope.go:117] "RemoveContainer" containerID="b0afe50b82a4ed4fa15c71d75b0addd355b0e11e803757d7a72ecdc39fd38963" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.650032 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-l7wpl_b464ce62-6f79-452c-a1c6-3c4878bcc8ba/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.661545 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-6rl8m_f30c0975-10b7-4d3b-98f7-63a02ae44927/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.672750 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-9jw99_7d67431b-e376-4558-83f2-af33c36b403b/manager/0.log" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.673405 4720 scope.go:117] "RemoveContainer" containerID="757b4c064a93c7e227f8248cc00357eebf04139d73204e5f3483b7e74714082e" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.706806 4720 scope.go:117] "RemoveContainer" containerID="ae4d37f38ddf0bd212636ab0a7b8c476af35c18661e463ed2b384e725d443be0" Jan 22 07:16:18 crc kubenswrapper[4720]: I0122 07:16:18.788112 4720 scope.go:117] "RemoveContainer" containerID="c1d3677247e926de6bdf9b25d658331f06213d5da76ebf21d4fa186dfdde6499" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.054802 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-h6fd5_25a73ab8-0306-4e57-9417-ce651e370925/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.072392 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-d5h9r_ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.239351 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ddkv8_21ee70f0-2938-4d3a-9edf-beaa943261ab/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.259464 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nn4jg_d681304a-06cd-4870-b2b5-4f10936b7775/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.309554 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-hq64w_de14bbbe-09fc-4f3c-8857-e3f7abca82f8/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.320927 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-47njc_fd7a6c01-1255-4f11-9dba-d3119753d47c/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.334976 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-tnvdl_a2440b28-2217-482c-87c6-443616b586cb/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.351300 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-gkhjf_e77f3a0e-4936-4b98-829b-6ea9ebe6e817/manager/0.log" Jan 22 07:16:19 crc kubenswrapper[4720]: I0122 07:16:19.368065 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85485jc7_476ecc66-be12-4a68-8de1-3a062ec12f55/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.087664 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-758ddb75c6-rjkvm_611fcdc7-1f1f-4530-9f34-68dae9bf4bd5/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.097492 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2g762_80ba9f63-ae49-476c-9282-f9b32f804ab3/registry-server/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.115679 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-m2hkw_6a45a130-7295-401c-a63c-1df68c263764/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.127509 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-wmhbp_0a6de6f6-4bef-4f84-b4b8-4de46e9347b1/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.145224 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4nvz6_ff37e0b2-69d6-4217-b44f-a8bf016e45d6/operator/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.161388 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-xqx67_e9c3503d-2a2a-4f59-8c25-b28a681cdcfb/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.563212 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4tlfl_0e186e5c-83e6-465d-9353-e9314702d85a/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.570315 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.571948 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2cs6n_1a3c6a91-064b-4006-b40f-ba7bc317aa83/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.582360 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/reloader/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.588596 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/frr-metrics/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.604832 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.625437 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/kube-rbac-proxy-frr/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.635626 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-frr-files/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.658002 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-reloader/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.666594 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kdlvf_e41ff3f3-3360-4fd3-99ed-448ca648f3b6/cp-metrics/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.677384 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bnntl_15c14672-daa2-408e-a693-6ac7bef81828/frr-k8s-webhook-server/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.709395 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7449444d4b-xh4ps_0e4de6cb-3e0d-46e0-a286-ab0ac437bb3c/manager/0.log" Jan 22 07:16:20 crc kubenswrapper[4720]: I0122 07:16:20.719786 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fc49cf759-5hjst_48a13b3e-ee8e-4ba2-ad41-c83176d673a5/webhook-server/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.064427 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/speaker/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.093179 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-67m5k_ce7509f5-f9e6-4130-b569-986bb9b61ffd/kube-rbac-proxy/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.111925 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-db559d697-hjx74_12086a20-e137-4c50-8273-3823f70fbfda/manager/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.125267 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-dsskv_cb206016-4343-44c8-88e0-2f6400068e6d/registry-server/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.210037 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:16:21 crc kubenswrapper[4720]: E0122 07:16:21.210278 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:16:21 crc 
kubenswrapper[4720]: I0122 07:16:21.900075 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-b9fc8_34089ae4-0f59-4909-96f9-b64ebe3e1a29/cert-manager-controller/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.915779 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-fkxm7_adf9f211-0196-4391-ae7a-c98e6e20147e/cert-manager-cainjector/0.log" Jan 22 07:16:21 crc kubenswrapper[4720]: I0122 07:16:21.924809 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-5klsr_34799a28-6c13-4288-946f-bc4d9e57b756/cert-manager-webhook/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.578466 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-b4rzf_5515f37e-3d61-49f0-ba5d-5d6896527923/nmstate-console-plugin/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.591782 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lfx9d_10356d8e-1761-4a55-ad79-fee34dd3aabf/nmstate-handler/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.610003 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hnrr7_bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb/nmstate-metrics/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.617520 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-hnrr7_bce8fd7c-de7e-4ca2-bebf-c37b5c6d5ddb/kube-rbac-proxy/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.625988 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zmhj8_b768bae9-692e-4039-8fea-d88359e16ee4/control-plane-machine-set-operator/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 
07:16:22.631428 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hb6mk_2e442158-14c1-4ed3-a62b-679e64c48148/nmstate-operator/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.642967 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-68ctq_1d307b97-f8d7-4624-ad82-c40af972eeff/nmstate-webhook/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.644627 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxdwr_42322892-7874-4c59-ab1a-e3f205212e2e/kube-rbac-proxy/0.log" Jan 22 07:16:22 crc kubenswrapper[4720]: I0122 07:16:22.653335 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hxdwr_42322892-7874-4c59-ab1a-e3f205212e2e/machine-api-operator/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.659154 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/extract/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.666516 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/util/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.675653 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_037caf7323fbd4952783c45a5586c50076cc7a0d4d49824be452e917b1vmprt_61a1b004-dab4-4246-93a6-81d023e08232/pull/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.689344 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59dd8b7cbf-kp5p9_a072cd1a-6b0c-4f3c-aa50-12a441bc87e3/manager/0.log" Jan 22 07:16:23 crc 
kubenswrapper[4720]: I0122 07:16:23.730963 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-g9d9q_15bf2b23-40fc-4958-9774-3c6e4f2c591a/manager/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.752877 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-4nvtq_cc13fc87-a160-4804-aef4-bb2c6ee89f13/manager/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.775248 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/extract/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.782939 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/util/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.795177 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_df9484187864248b0416024bc436a60f9e23a62baa023aadac0f15570dbdgwp_40adb427-e593-415a-a491-fc641e94e5a2/pull/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.808270 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-l7wpl_b464ce62-6f79-452c-a1c6-3c4878bcc8ba/manager/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.819467 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-6rl8m_f30c0975-10b7-4d3b-98f7-63a02ae44927/manager/0.log" Jan 22 07:16:23 crc kubenswrapper[4720]: I0122 07:16:23.833280 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-9jw99_7d67431b-e376-4558-83f2-af33c36b403b/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.032102 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-54ccf4f85d-h6fd5_25a73ab8-0306-4e57-9417-ce651e370925/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.125712 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-69d6c9f5b8-d5h9r_ace6e6bf-fddd-4105-af4e-5ad7fcd9f4d1/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.344466 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-ddkv8_21ee70f0-2938-4d3a-9edf-beaa943261ab/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.355018 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-nn4jg_d681304a-06cd-4870-b2b5-4f10936b7775/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.390177 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-hq64w_de14bbbe-09fc-4f3c-8857-e3f7abca82f8/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.399111 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5d8f59fb49-47njc_fd7a6c01-1255-4f11-9dba-d3119753d47c/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.413062 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6b8bc8d87d-tnvdl_a2440b28-2217-482c-87c6-443616b586cb/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.421675 4720 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-gkhjf_e77f3a0e-4936-4b98-829b-6ea9ebe6e817/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.446615 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85485jc7_476ecc66-be12-4a68-8de1-3a062ec12f55/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.958603 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-758ddb75c6-rjkvm_611fcdc7-1f1f-4530-9f34-68dae9bf4bd5/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.969617 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2g762_80ba9f63-ae49-476c-9282-f9b32f804ab3/registry-server/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.987167 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-m2hkw_6a45a130-7295-401c-a63c-1df68c263764/manager/0.log" Jan 22 07:16:24 crc kubenswrapper[4720]: I0122 07:16:24.996505 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-wmhbp_0a6de6f6-4bef-4f84-b4b8-4de46e9347b1/manager/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.013100 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-4nvz6_ff37e0b2-69d6-4217-b44f-a8bf016e45d6/operator/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.027999 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-xqx67_e9c3503d-2a2a-4f59-8c25-b28a681cdcfb/manager/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.309873 4720 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-4tlfl_0e186e5c-83e6-465d-9353-e9314702d85a/manager/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.321880 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-2cs6n_1a3c6a91-064b-4006-b40f-ba7bc317aa83/manager/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.745218 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-db559d697-hjx74_12086a20-e137-4c50-8273-3823f70fbfda/manager/0.log" Jan 22 07:16:25 crc kubenswrapper[4720]: I0122 07:16:25.755052 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-dsskv_cb206016-4343-44c8-88e0-2f6400068e6d/registry-server/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.624575 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/kube-multus-additional-cni-plugins/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.636184 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/egress-router-binary-copy/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.646315 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/cni-plugins/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.654805 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/bond-cni-plugin/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.663699 4720 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/routeoverride-cni/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.674739 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/whereabouts-cni-bincopy/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.681612 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-lxzml_c7b3c34a-9870-4c9f-990b-29b7e768d5a5/whereabouts-cni/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.698077 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-xf5cz_a8593368-7930-499d-aa21-6526251ce66c/multus-admission-controller/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.703465 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-xf5cz_a8593368-7930-499d-aa21-6526251ce66c/kube-rbac-proxy/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.772215 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/2.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.781113 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-n5w5r_85373343-156d-4de0-a72b-baaf7c4e3d08/kube-multus/3.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.811820 4720 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-kvtch_409f50e8-9b68-4efe-8eb4-bc144d383817/network-metrics-daemon/0.log" Jan 22 07:16:27 crc kubenswrapper[4720]: I0122 07:16:27.818005 4720 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_network-metrics-daemon-kvtch_409f50e8-9b68-4efe-8eb4-bc144d383817/kube-rbac-proxy/0.log" Jan 22 07:16:35 crc kubenswrapper[4720]: I0122 07:16:35.210339 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:16:35 crc kubenswrapper[4720]: E0122 07:16:35.211146 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:16:47 crc kubenswrapper[4720]: I0122 07:16:47.210725 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:16:47 crc kubenswrapper[4720]: E0122 07:16:47.211618 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:16:58 crc kubenswrapper[4720]: I0122 07:16:58.216415 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:16:58 crc kubenswrapper[4720]: E0122 07:16:58.217142 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:17:09 crc kubenswrapper[4720]: I0122 07:17:09.210798 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:17:09 crc kubenswrapper[4720]: E0122 07:17:09.211534 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:17:18 crc kubenswrapper[4720]: I0122 07:17:18.916868 4720 scope.go:117] "RemoveContainer" containerID="16ed8beab0f87d6a21c1576aa5bdd58052f6c272d8b38b9a587d0a2eb080151e" Jan 22 07:17:18 crc kubenswrapper[4720]: I0122 07:17:18.979711 4720 scope.go:117] "RemoveContainer" containerID="a34476e119b754658da1e4b1043687520f9bb7dd42e99c7378ebf0c11f995894" Jan 22 07:17:18 crc kubenswrapper[4720]: I0122 07:17:18.999110 4720 scope.go:117] "RemoveContainer" containerID="ae3e4a93a59399cddf562b020360a952cf4aff72540f0a9855c2754dd4ced9d1" Jan 22 07:17:19 crc kubenswrapper[4720]: I0122 07:17:19.028536 4720 scope.go:117] "RemoveContainer" containerID="37ffc5b53586441d22624c34ddde403cbd2dc8c740d4a9892c32e1b4a7a9b8e4" Jan 22 07:17:19 crc kubenswrapper[4720]: I0122 07:17:19.065190 4720 scope.go:117] "RemoveContainer" containerID="54f87871a87363ea945108d40f1796b642e83c8e7ca3c68c49dce0ba66ee7d31" Jan 22 07:17:22 crc kubenswrapper[4720]: I0122 07:17:22.210920 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:17:22 crc 
kubenswrapper[4720]: E0122 07:17:22.211466 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:17:34 crc kubenswrapper[4720]: I0122 07:17:34.211629 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:17:34 crc kubenswrapper[4720]: E0122 07:17:34.213165 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:17:47 crc kubenswrapper[4720]: I0122 07:17:47.210798 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:17:47 crc kubenswrapper[4720]: E0122 07:17:47.212263 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:17:58 crc kubenswrapper[4720]: I0122 07:17:58.218370 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 
22 07:17:58 crc kubenswrapper[4720]: E0122 07:17:58.219111 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:18:13 crc kubenswrapper[4720]: I0122 07:18:13.211026 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:18:13 crc kubenswrapper[4720]: E0122 07:18:13.211772 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:18:19 crc kubenswrapper[4720]: I0122 07:18:19.158424 4720 scope.go:117] "RemoveContainer" containerID="806ced3cbc4ba6c9ad97758119eac78b6e28162e8a0772cef70e81eaecec156f" Jan 22 07:18:19 crc kubenswrapper[4720]: I0122 07:18:19.210400 4720 scope.go:117] "RemoveContainer" containerID="82f32033f1ff54d39280eee2ec0a53551990022f44471df205f8b5e63471a831" Jan 22 07:18:19 crc kubenswrapper[4720]: I0122 07:18:19.234543 4720 scope.go:117] "RemoveContainer" containerID="2e524cfb70b60a78682e2f54be696366ed85cc1fd60ba385dc5c86438ad662ab" Jan 22 07:18:19 crc kubenswrapper[4720]: I0122 07:18:19.285499 4720 scope.go:117] "RemoveContainer" containerID="4db83137fc068de77dc06676c1b0f60f172e33619f5133b2922f88fb8045917d" Jan 22 07:18:19 crc kubenswrapper[4720]: I0122 07:18:19.327361 4720 scope.go:117] "RemoveContainer" 
containerID="77bb54fccc09173dae5f0b5112b1b4564a07b8ab034fbebd99dbecbce071a4ec" Jan 22 07:18:26 crc kubenswrapper[4720]: I0122 07:18:26.210504 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:18:26 crc kubenswrapper[4720]: E0122 07:18:26.211191 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:18:39 crc kubenswrapper[4720]: I0122 07:18:39.210050 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:18:39 crc kubenswrapper[4720]: E0122 07:18:39.210750 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.799682 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:18:43 crc kubenswrapper[4720]: E0122 07:18:43.801833 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="683b9d33-e312-4f08-b4f2-a20d2f49a303" containerName="collect-profiles" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.801933 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="683b9d33-e312-4f08-b4f2-a20d2f49a303" containerName="collect-profiles" Jan 22 
07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.802345 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="683b9d33-e312-4f08-b4f2-a20d2f49a303" containerName="collect-profiles" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.804346 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.813987 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.972437 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.972990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29hf\" (UniqueName: \"kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:43 crc kubenswrapper[4720]: I0122 07:18:43.973072 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.075297 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.075376 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.075429 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z29hf\" (UniqueName: \"kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.075864 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.076013 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.095098 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z29hf\" (UniqueName: 
\"kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf\") pod \"certified-operators-hrq4c\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.129159 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:44 crc kubenswrapper[4720]: I0122 07:18:44.743865 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:18:45 crc kubenswrapper[4720]: I0122 07:18:45.385404 4720 generic.go:334] "Generic (PLEG): container finished" podID="883f5516-228e-4167-a5a3-3e89800d1469" containerID="2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41" exitCode=0 Jan 22 07:18:45 crc kubenswrapper[4720]: I0122 07:18:45.385446 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerDied","Data":"2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41"} Jan 22 07:18:45 crc kubenswrapper[4720]: I0122 07:18:45.385714 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerStarted","Data":"f5467818ce11a0dcefbbb2adb9f542948e13f1def01faf47425c0e786aded3ae"} Jan 22 07:18:45 crc kubenswrapper[4720]: I0122 07:18:45.387323 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 07:18:46 crc kubenswrapper[4720]: I0122 07:18:46.399352 4720 generic.go:334] "Generic (PLEG): container finished" podID="883f5516-228e-4167-a5a3-3e89800d1469" containerID="8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365" exitCode=0 Jan 22 07:18:46 crc kubenswrapper[4720]: I0122 07:18:46.399458 4720 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerDied","Data":"8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365"} Jan 22 07:18:47 crc kubenswrapper[4720]: I0122 07:18:47.412711 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerStarted","Data":"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b"} Jan 22 07:18:47 crc kubenswrapper[4720]: I0122 07:18:47.443123 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hrq4c" podStartSLOduration=2.998138875 podStartE2EDuration="4.443094273s" podCreationTimestamp="2026-01-22 07:18:43 +0000 UTC" firstStartedPulling="2026-01-22 07:18:45.387119754 +0000 UTC m=+2617.529026459" lastFinishedPulling="2026-01-22 07:18:46.832075152 +0000 UTC m=+2618.973981857" observedRunningTime="2026-01-22 07:18:47.430613779 +0000 UTC m=+2619.572520534" watchObservedRunningTime="2026-01-22 07:18:47.443094273 +0000 UTC m=+2619.585001018" Jan 22 07:18:50 crc kubenswrapper[4720]: I0122 07:18:50.212339 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:18:50 crc kubenswrapper[4720]: E0122 07:18:50.212936 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.189980 4720 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.192391 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.210904 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.266717 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.266836 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.266962 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6rc7\" (UniqueName: \"kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.368054 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " 
pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.368130 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6rc7\" (UniqueName: \"kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.368203 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.368683 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.368921 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 crc kubenswrapper[4720]: I0122 07:18:53.400432 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6rc7\" (UniqueName: \"kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7\") pod \"redhat-operators-8xpvg\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:53 
crc kubenswrapper[4720]: I0122 07:18:53.515115 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.040386 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.129524 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.129892 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.192851 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.465114 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerStarted","Data":"181bbeeec6903a0b41c9cbe5c4f2dfa6a9e4446beea4e61d3dbba9e69e29dabf"} Jan 22 07:18:54 crc kubenswrapper[4720]: I0122 07:18:54.512664 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:55 crc kubenswrapper[4720]: I0122 07:18:55.473386 4720 generic.go:334] "Generic (PLEG): container finished" podID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerID="3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab" exitCode=0 Jan 22 07:18:55 crc kubenswrapper[4720]: I0122 07:18:55.473449 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerDied","Data":"3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab"} 
Jan 22 07:18:57 crc kubenswrapper[4720]: I0122 07:18:57.493031 4720 generic.go:334] "Generic (PLEG): container finished" podID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerID="23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6" exitCode=0 Jan 22 07:18:57 crc kubenswrapper[4720]: I0122 07:18:57.493331 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerDied","Data":"23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6"} Jan 22 07:18:58 crc kubenswrapper[4720]: I0122 07:18:58.502847 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerStarted","Data":"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43"} Jan 22 07:18:58 crc kubenswrapper[4720]: I0122 07:18:58.526611 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8xpvg" podStartSLOduration=3.06913467 podStartE2EDuration="5.526593062s" podCreationTimestamp="2026-01-22 07:18:53 +0000 UTC" firstStartedPulling="2026-01-22 07:18:55.475090151 +0000 UTC m=+2627.616996856" lastFinishedPulling="2026-01-22 07:18:57.932548553 +0000 UTC m=+2630.074455248" observedRunningTime="2026-01-22 07:18:58.523549836 +0000 UTC m=+2630.665456541" watchObservedRunningTime="2026-01-22 07:18:58.526593062 +0000 UTC m=+2630.668499767" Jan 22 07:18:58 crc kubenswrapper[4720]: I0122 07:18:58.985784 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:18:58 crc kubenswrapper[4720]: I0122 07:18:58.986083 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hrq4c" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="registry-server" 
containerID="cri-o://ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b" gracePeriod=2 Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.445422 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.490811 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z29hf\" (UniqueName: \"kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf\") pod \"883f5516-228e-4167-a5a3-3e89800d1469\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.491092 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content\") pod \"883f5516-228e-4167-a5a3-3e89800d1469\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.491169 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities\") pod \"883f5516-228e-4167-a5a3-3e89800d1469\" (UID: \"883f5516-228e-4167-a5a3-3e89800d1469\") " Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.492139 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities" (OuterVolumeSpecName: "utilities") pod "883f5516-228e-4167-a5a3-3e89800d1469" (UID: "883f5516-228e-4167-a5a3-3e89800d1469"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.516524 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf" (OuterVolumeSpecName: "kube-api-access-z29hf") pod "883f5516-228e-4167-a5a3-3e89800d1469" (UID: "883f5516-228e-4167-a5a3-3e89800d1469"). InnerVolumeSpecName "kube-api-access-z29hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.519415 4720 generic.go:334] "Generic (PLEG): container finished" podID="883f5516-228e-4167-a5a3-3e89800d1469" containerID="ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b" exitCode=0 Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.520729 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hrq4c" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.521414 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerDied","Data":"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b"} Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.521484 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hrq4c" event={"ID":"883f5516-228e-4167-a5a3-3e89800d1469","Type":"ContainerDied","Data":"f5467818ce11a0dcefbbb2adb9f542948e13f1def01faf47425c0e786aded3ae"} Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.521502 4720 scope.go:117] "RemoveContainer" containerID="ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.534396 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "883f5516-228e-4167-a5a3-3e89800d1469" (UID: "883f5516-228e-4167-a5a3-3e89800d1469"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.548750 4720 scope.go:117] "RemoveContainer" containerID="8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.577415 4720 scope.go:117] "RemoveContainer" containerID="2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.593054 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.593083 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883f5516-228e-4167-a5a3-3e89800d1469-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.593093 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z29hf\" (UniqueName: \"kubernetes.io/projected/883f5516-228e-4167-a5a3-3e89800d1469-kube-api-access-z29hf\") on node \"crc\" DevicePath \"\"" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.614491 4720 scope.go:117] "RemoveContainer" containerID="ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b" Jan 22 07:18:59 crc kubenswrapper[4720]: E0122 07:18:59.614895 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b\": container with ID starting with 
ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b not found: ID does not exist" containerID="ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.614959 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b"} err="failed to get container status \"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b\": rpc error: code = NotFound desc = could not find container \"ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b\": container with ID starting with ced2f5678475759ce1a8867e5ab7c8bf94f9bc4f93ca83cc9a3b7b2a938f484b not found: ID does not exist" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.614985 4720 scope.go:117] "RemoveContainer" containerID="8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365" Jan 22 07:18:59 crc kubenswrapper[4720]: E0122 07:18:59.615197 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365\": container with ID starting with 8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365 not found: ID does not exist" containerID="8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.615216 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365"} err="failed to get container status \"8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365\": rpc error: code = NotFound desc = could not find container \"8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365\": container with ID starting with 8fa45e555576d0fc79fb0774fd53f1aa311d756f26f0ccb2f60e03eeac859365 not found: ID does not 
exist" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.615228 4720 scope.go:117] "RemoveContainer" containerID="2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41" Jan 22 07:18:59 crc kubenswrapper[4720]: E0122 07:18:59.615362 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41\": container with ID starting with 2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41 not found: ID does not exist" containerID="2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.615379 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41"} err="failed to get container status \"2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41\": rpc error: code = NotFound desc = could not find container \"2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41\": container with ID starting with 2655c7bf2c2c3ccd66b7ee13a1bdea7527ad1bc1216ebf3acd6b9e01bd565c41 not found: ID does not exist" Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.865987 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:18:59 crc kubenswrapper[4720]: I0122 07:18:59.882531 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hrq4c"] Jan 22 07:19:00 crc kubenswrapper[4720]: I0122 07:19:00.231951 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883f5516-228e-4167-a5a3-3e89800d1469" path="/var/lib/kubelet/pods/883f5516-228e-4167-a5a3-3e89800d1469/volumes" Jan 22 07:19:03 crc kubenswrapper[4720]: I0122 07:19:03.515940 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:03 crc kubenswrapper[4720]: I0122 07:19:03.516002 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:04 crc kubenswrapper[4720]: I0122 07:19:04.212159 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:19:04 crc kubenswrapper[4720]: E0122 07:19:04.213367 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:19:04 crc kubenswrapper[4720]: I0122 07:19:04.575278 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8xpvg" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="registry-server" probeResult="failure" output=< Jan 22 07:19:04 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s Jan 22 07:19:04 crc kubenswrapper[4720]: > Jan 22 07:19:13 crc kubenswrapper[4720]: I0122 07:19:13.558848 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:13 crc kubenswrapper[4720]: I0122 07:19:13.603545 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:15 crc kubenswrapper[4720]: I0122 07:19:15.210724 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:19:15 crc kubenswrapper[4720]: E0122 07:19:15.210967 4720 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:19:17 crc kubenswrapper[4720]: I0122 07:19:17.996191 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:19:17 crc kubenswrapper[4720]: I0122 07:19:17.997050 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8xpvg" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="registry-server" containerID="cri-o://a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43" gracePeriod=2 Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.447102 4720 scope.go:117] "RemoveContainer" containerID="b069cde98b1b46e01886309eab44f22bb37d6c4097276e0df4ed2c93c64c7aa5" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.510095 4720 scope.go:117] "RemoveContainer" containerID="394d83aeeb565d877c96015494c1a2873d100f96b0bee82fe09279026e99e779" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.622475 4720 scope.go:117] "RemoveContainer" containerID="7124e005859df9f544b6042b0339a5236a0dda1cf1b3503aeb1de614855dd7d9" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.633210 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.684000 4720 scope.go:117] "RemoveContainer" containerID="9db6abd6cb610984867ad2454018e0fd723010ea518ed4379a86b3ee88bb3530" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.740640 4720 generic.go:334] "Generic (PLEG): container finished" podID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerID="a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43" exitCode=0 Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.740697 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerDied","Data":"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43"} Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.740722 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8xpvg" event={"ID":"1b51d4ef-60c3-42ac-97ff-ad0efa93963e","Type":"ContainerDied","Data":"181bbeeec6903a0b41c9cbe5c4f2dfa6a9e4446beea4e61d3dbba9e69e29dabf"} Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.740739 4720 scope.go:117] "RemoveContainer" containerID="a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.740855 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8xpvg" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.742199 4720 scope.go:117] "RemoveContainer" containerID="d0d3c5d00bf8dcf6e6d84ca5e0ba8f26f0c5ba342888311b1ff972a0f7ce8d58" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.745966 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content\") pod \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.746021 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities\") pod \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.746176 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6rc7\" (UniqueName: \"kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7\") pod \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\" (UID: \"1b51d4ef-60c3-42ac-97ff-ad0efa93963e\") " Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.748229 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities" (OuterVolumeSpecName: "utilities") pod "1b51d4ef-60c3-42ac-97ff-ad0efa93963e" (UID: "1b51d4ef-60c3-42ac-97ff-ad0efa93963e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.754343 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7" (OuterVolumeSpecName: "kube-api-access-x6rc7") pod "1b51d4ef-60c3-42ac-97ff-ad0efa93963e" (UID: "1b51d4ef-60c3-42ac-97ff-ad0efa93963e"). InnerVolumeSpecName "kube-api-access-x6rc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.790092 4720 scope.go:117] "RemoveContainer" containerID="23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.823091 4720 scope.go:117] "RemoveContainer" containerID="3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.849101 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.849199 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6rc7\" (UniqueName: \"kubernetes.io/projected/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-kube-api-access-x6rc7\") on node \"crc\" DevicePath \"\"" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.854498 4720 scope.go:117] "RemoveContainer" containerID="a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43" Jan 22 07:19:19 crc kubenswrapper[4720]: E0122 07:19:19.854965 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43\": container with ID starting with a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43 not found: ID does not exist" 
containerID="a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.855065 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43"} err="failed to get container status \"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43\": rpc error: code = NotFound desc = could not find container \"a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43\": container with ID starting with a7d878f4933e9bea896e6ecb2d0986aa16235d3e3961a296cbe3e94d56ab2a43 not found: ID does not exist" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.855203 4720 scope.go:117] "RemoveContainer" containerID="23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6" Jan 22 07:19:19 crc kubenswrapper[4720]: E0122 07:19:19.855694 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6\": container with ID starting with 23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6 not found: ID does not exist" containerID="23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.855735 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6"} err="failed to get container status \"23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6\": rpc error: code = NotFound desc = could not find container \"23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6\": container with ID starting with 23349ff58048ff993f3f6a652232dcf4450b739bb94cef729b1daf2961811ff6 not found: ID does not exist" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.855765 4720 scope.go:117] 
"RemoveContainer" containerID="3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab" Jan 22 07:19:19 crc kubenswrapper[4720]: E0122 07:19:19.856012 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab\": container with ID starting with 3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab not found: ID does not exist" containerID="3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.856036 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab"} err="failed to get container status \"3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab\": rpc error: code = NotFound desc = could not find container \"3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab\": container with ID starting with 3e9cc039ee8b8e93ff9d9244a284b2dc2f369486b7fb94f31bd50a64c3408eab not found: ID does not exist" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.908753 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b51d4ef-60c3-42ac-97ff-ad0efa93963e" (UID: "1b51d4ef-60c3-42ac-97ff-ad0efa93963e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:19:19 crc kubenswrapper[4720]: I0122 07:19:19.949779 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b51d4ef-60c3-42ac-97ff-ad0efa93963e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:19:20 crc kubenswrapper[4720]: I0122 07:19:20.072778 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:19:20 crc kubenswrapper[4720]: I0122 07:19:20.079499 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8xpvg"] Jan 22 07:19:20 crc kubenswrapper[4720]: I0122 07:19:20.220699 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" path="/var/lib/kubelet/pods/1b51d4ef-60c3-42ac-97ff-ad0efa93963e/volumes" Jan 22 07:19:28 crc kubenswrapper[4720]: I0122 07:19:28.215708 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:19:28 crc kubenswrapper[4720]: E0122 07:19:28.216576 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:19:40 crc kubenswrapper[4720]: I0122 07:19:40.213718 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:19:40 crc kubenswrapper[4720]: E0122 07:19:40.214409 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:19:53 crc kubenswrapper[4720]: I0122 07:19:53.210405 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:19:53 crc kubenswrapper[4720]: E0122 07:19:53.211209 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:20:07 crc kubenswrapper[4720]: I0122 07:20:07.211150 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:20:07 crc kubenswrapper[4720]: E0122 07:20:07.211842 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:20:21 crc kubenswrapper[4720]: I0122 07:20:21.210757 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:20:21 crc kubenswrapper[4720]: E0122 07:20:21.211458 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:20:35 crc kubenswrapper[4720]: I0122 07:20:35.211263 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c" Jan 22 07:20:35 crc kubenswrapper[4720]: I0122 07:20:35.395211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e"} Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.597281 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"] Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598209 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="extract-content" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598238 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="extract-content" Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598249 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598254 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598262 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="extract-utilities" Jan 22 07:22:42 crc 
kubenswrapper[4720]: I0122 07:22:42.598275 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="extract-utilities" Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598289 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="extract-content" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598295 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="extract-content" Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598315 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598321 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: E0122 07:22:42.598333 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="extract-utilities" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598338 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="extract-utilities" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598490 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b51d4ef-60c3-42ac-97ff-ad0efa93963e" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.598503 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="883f5516-228e-4167-a5a3-3e89800d1469" containerName="registry-server" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.599579 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.617106 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"] Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.717243 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzfh\" (UniqueName: \"kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.717323 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.717449 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.819115 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.819185 4720 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.819253 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkzfh\" (UniqueName: \"kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.819632 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.819661 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.838387 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkzfh\" (UniqueName: \"kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh\") pod \"redhat-marketplace-mzfcr\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") " pod="openshift-marketplace/redhat-marketplace-mzfcr" Jan 22 07:22:42 crc kubenswrapper[4720]: I0122 07:22:42.917355 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:43 crc kubenswrapper[4720]: I0122 07:22:43.453232 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"]
Jan 22 07:22:44 crc kubenswrapper[4720]: I0122 07:22:44.401107 4720 generic.go:334] "Generic (PLEG): container finished" podID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerID="2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53" exitCode=0
Jan 22 07:22:44 crc kubenswrapper[4720]: I0122 07:22:44.401170 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerDied","Data":"2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53"}
Jan 22 07:22:44 crc kubenswrapper[4720]: I0122 07:22:44.401383 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerStarted","Data":"7c5c594d778d4021336f369403c9f0b26d9607165eb9e60556483126d6e36859"}
Jan 22 07:22:45 crc kubenswrapper[4720]: I0122 07:22:45.409395 4720 generic.go:334] "Generic (PLEG): container finished" podID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerID="3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed" exitCode=0
Jan 22 07:22:45 crc kubenswrapper[4720]: I0122 07:22:45.409556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerDied","Data":"3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed"}
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.012203 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.014260 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.041493 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.045104 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5qgh\" (UniqueName: \"kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.045175 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.045215 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.146700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5qgh\" (UniqueName: \"kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.146769 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.146809 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.147805 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.147892 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.178124 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5qgh\" (UniqueName: \"kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh\") pod \"community-operators-qgp58\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") " pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.340358 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.451096 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerStarted","Data":"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"}
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.480513 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mzfcr" podStartSLOduration=3.081596603 podStartE2EDuration="4.480492543s" podCreationTimestamp="2026-01-22 07:22:42 +0000 UTC" firstStartedPulling="2026-01-22 07:22:44.404071676 +0000 UTC m=+2856.545978381" lastFinishedPulling="2026-01-22 07:22:45.802967616 +0000 UTC m=+2857.944874321" observedRunningTime="2026-01-22 07:22:46.474796661 +0000 UTC m=+2858.616703386" watchObservedRunningTime="2026-01-22 07:22:46.480492543 +0000 UTC m=+2858.622399238"
Jan 22 07:22:46 crc kubenswrapper[4720]: I0122 07:22:46.895017 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:22:46 crc kubenswrapper[4720]: W0122 07:22:46.904819 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aca8fe3_9985_4750_8074_1a65c43e60db.slice/crio-d5b1049a4064890753e048fb98f30211dce275ff16ac077e6e38c70aee64651d WatchSource:0}: Error finding container d5b1049a4064890753e048fb98f30211dce275ff16ac077e6e38c70aee64651d: Status 404 returned error can't find the container with id d5b1049a4064890753e048fb98f30211dce275ff16ac077e6e38c70aee64651d
Jan 22 07:22:47 crc kubenswrapper[4720]: I0122 07:22:47.460328 4720 generic.go:334] "Generic (PLEG): container finished" podID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerID="48418b13c5ecec54f1c76189cef3169fea86b6938dee708a6ebb506fa6dd44a6" exitCode=0
Jan 22 07:22:47 crc kubenswrapper[4720]: I0122 07:22:47.461945 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerDied","Data":"48418b13c5ecec54f1c76189cef3169fea86b6938dee708a6ebb506fa6dd44a6"}
Jan 22 07:22:47 crc kubenswrapper[4720]: I0122 07:22:47.461977 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerStarted","Data":"d5b1049a4064890753e048fb98f30211dce275ff16ac077e6e38c70aee64651d"}
Jan 22 07:22:48 crc kubenswrapper[4720]: I0122 07:22:48.468026 4720 generic.go:334] "Generic (PLEG): container finished" podID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerID="803bfa80c0d530c1ab17a3b504a5dd0f975177642d52d7adbc25629f97fa3f81" exitCode=0
Jan 22 07:22:48 crc kubenswrapper[4720]: I0122 07:22:48.468081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerDied","Data":"803bfa80c0d530c1ab17a3b504a5dd0f975177642d52d7adbc25629f97fa3f81"}
Jan 22 07:22:49 crc kubenswrapper[4720]: I0122 07:22:49.478672 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerStarted","Data":"de86acd478d2648aa2eb178cb79b28721498e1a78e5c686739c8e9bef7a4e963"}
Jan 22 07:22:49 crc kubenswrapper[4720]: I0122 07:22:49.497154 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgp58" podStartSLOduration=3.104037923 podStartE2EDuration="4.497136268s" podCreationTimestamp="2026-01-22 07:22:45 +0000 UTC" firstStartedPulling="2026-01-22 07:22:47.462639963 +0000 UTC m=+2859.604546668" lastFinishedPulling="2026-01-22 07:22:48.855738308 +0000 UTC m=+2860.997645013" observedRunningTime="2026-01-22 07:22:49.495529522 +0000 UTC m=+2861.637436227" watchObservedRunningTime="2026-01-22 07:22:49.497136268 +0000 UTC m=+2861.639042983"
Jan 22 07:22:52 crc kubenswrapper[4720]: I0122 07:22:52.917814 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:52 crc kubenswrapper[4720]: I0122 07:22:52.919301 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:52 crc kubenswrapper[4720]: I0122 07:22:52.961336 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:53 crc kubenswrapper[4720]: I0122 07:22:53.595587 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:54 crc kubenswrapper[4720]: I0122 07:22:54.984118 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"]
Jan 22 07:22:55 crc kubenswrapper[4720]: I0122 07:22:55.554163 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mzfcr" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="registry-server" containerID="cri-o://378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd" gracePeriod=2
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.341152 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.341395 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.407267 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.520243 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.563314 4720 generic.go:334] "Generic (PLEG): container finished" podID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerID="378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd" exitCode=0
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.564220 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mzfcr"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.564578 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerDied","Data":"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"}
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.564611 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mzfcr" event={"ID":"070b28e7-b049-4d40-ab4c-f0e83cdec265","Type":"ContainerDied","Data":"7c5c594d778d4021336f369403c9f0b26d9607165eb9e60556483126d6e36859"}
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.564629 4720 scope.go:117] "RemoveContainer" containerID="378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.586015 4720 scope.go:117] "RemoveContainer" containerID="3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.605931 4720 scope.go:117] "RemoveContainer" containerID="2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.615542 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.617838 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities\") pod \"070b28e7-b049-4d40-ab4c-f0e83cdec265\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") "
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.618148 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkzfh\" (UniqueName: \"kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh\") pod \"070b28e7-b049-4d40-ab4c-f0e83cdec265\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") "
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.618326 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content\") pod \"070b28e7-b049-4d40-ab4c-f0e83cdec265\" (UID: \"070b28e7-b049-4d40-ab4c-f0e83cdec265\") "
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.620480 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities" (OuterVolumeSpecName: "utilities") pod "070b28e7-b049-4d40-ab4c-f0e83cdec265" (UID: "070b28e7-b049-4d40-ab4c-f0e83cdec265"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.627144 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh" (OuterVolumeSpecName: "kube-api-access-xkzfh") pod "070b28e7-b049-4d40-ab4c-f0e83cdec265" (UID: "070b28e7-b049-4d40-ab4c-f0e83cdec265"). InnerVolumeSpecName "kube-api-access-xkzfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.656708 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "070b28e7-b049-4d40-ab4c-f0e83cdec265" (UID: "070b28e7-b049-4d40-ab4c-f0e83cdec265"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.675893 4720 scope.go:117] "RemoveContainer" containerID="378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"
Jan 22 07:22:56 crc kubenswrapper[4720]: E0122 07:22:56.677052 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd\": container with ID starting with 378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd not found: ID does not exist" containerID="378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.677100 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd"} err="failed to get container status \"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd\": rpc error: code = NotFound desc = could not find container \"378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd\": container with ID starting with 378e92fd45775326c9192e63f304582a653234f5bb6f0ddfd6ebffe385af26fd not found: ID does not exist"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.677130 4720 scope.go:117] "RemoveContainer" containerID="3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed"
Jan 22 07:22:56 crc kubenswrapper[4720]: E0122 07:22:56.677482 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed\": container with ID starting with 3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed not found: ID does not exist" containerID="3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.677545 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed"} err="failed to get container status \"3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed\": rpc error: code = NotFound desc = could not find container \"3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed\": container with ID starting with 3fb72dd2f3136e372b47c7ee7f4bc76e05ce3019415becadf7196b23bebd72ed not found: ID does not exist"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.677585 4720 scope.go:117] "RemoveContainer" containerID="2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53"
Jan 22 07:22:56 crc kubenswrapper[4720]: E0122 07:22:56.678176 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53\": container with ID starting with 2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53 not found: ID does not exist" containerID="2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.678226 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53"} err="failed to get container status \"2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53\": rpc error: code = NotFound desc = could not find container \"2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53\": container with ID starting with 2eb3741c9e55d1a9a15991659d3fd85a5ef70a2a3a7d2a969b45992b5b9f3e53 not found: ID does not exist"
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.720088 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkzfh\" (UniqueName: \"kubernetes.io/projected/070b28e7-b049-4d40-ab4c-f0e83cdec265-kube-api-access-xkzfh\") on node \"crc\" DevicePath \"\""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.720128 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.720140 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070b28e7-b049-4d40-ab4c-f0e83cdec265-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.896981 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"]
Jan 22 07:22:56 crc kubenswrapper[4720]: I0122 07:22:56.903779 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mzfcr"]
Jan 22 07:22:58 crc kubenswrapper[4720]: I0122 07:22:58.223958 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" path="/var/lib/kubelet/pods/070b28e7-b049-4d40-ab4c-f0e83cdec265/volumes"
Jan 22 07:22:59 crc kubenswrapper[4720]: I0122 07:22:59.780614 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:22:59 crc kubenswrapper[4720]: I0122 07:22:59.780665 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.188780 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.189492 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgp58" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="registry-server" containerID="cri-o://de86acd478d2648aa2eb178cb79b28721498e1a78e5c686739c8e9bef7a4e963" gracePeriod=2
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.608280 4720 generic.go:334] "Generic (PLEG): container finished" podID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerID="de86acd478d2648aa2eb178cb79b28721498e1a78e5c686739c8e9bef7a4e963" exitCode=0
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.608325 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerDied","Data":"de86acd478d2648aa2eb178cb79b28721498e1a78e5c686739c8e9bef7a4e963"}
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.712190 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.800066 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5qgh\" (UniqueName: \"kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh\") pod \"0aca8fe3-9985-4750-8074-1a65c43e60db\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") "
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.800145 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities\") pod \"0aca8fe3-9985-4750-8074-1a65c43e60db\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") "
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.800174 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content\") pod \"0aca8fe3-9985-4750-8074-1a65c43e60db\" (UID: \"0aca8fe3-9985-4750-8074-1a65c43e60db\") "
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.801144 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities" (OuterVolumeSpecName: "utilities") pod "0aca8fe3-9985-4750-8074-1a65c43e60db" (UID: "0aca8fe3-9985-4750-8074-1a65c43e60db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.810126 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh" (OuterVolumeSpecName: "kube-api-access-m5qgh") pod "0aca8fe3-9985-4750-8074-1a65c43e60db" (UID: "0aca8fe3-9985-4750-8074-1a65c43e60db"). InnerVolumeSpecName "kube-api-access-m5qgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.860900 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0aca8fe3-9985-4750-8074-1a65c43e60db" (UID: "0aca8fe3-9985-4750-8074-1a65c43e60db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.902505 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5qgh\" (UniqueName: \"kubernetes.io/projected/0aca8fe3-9985-4750-8074-1a65c43e60db-kube-api-access-m5qgh\") on node \"crc\" DevicePath \"\""
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.902537 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:23:01 crc kubenswrapper[4720]: I0122 07:23:01.902546 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0aca8fe3-9985-4750-8074-1a65c43e60db-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.621794 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgp58" event={"ID":"0aca8fe3-9985-4750-8074-1a65c43e60db","Type":"ContainerDied","Data":"d5b1049a4064890753e048fb98f30211dce275ff16ac077e6e38c70aee64651d"}
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.621874 4720 scope.go:117] "RemoveContainer" containerID="de86acd478d2648aa2eb178cb79b28721498e1a78e5c686739c8e9bef7a4e963"
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.621943 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgp58"
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.652903 4720 scope.go:117] "RemoveContainer" containerID="803bfa80c0d530c1ab17a3b504a5dd0f975177642d52d7adbc25629f97fa3f81"
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.652903 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.658606 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgp58"]
Jan 22 07:23:02 crc kubenswrapper[4720]: I0122 07:23:02.669979 4720 scope.go:117] "RemoveContainer" containerID="48418b13c5ecec54f1c76189cef3169fea86b6938dee708a6ebb506fa6dd44a6"
Jan 22 07:23:04 crc kubenswrapper[4720]: I0122 07:23:04.227658 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" path="/var/lib/kubelet/pods/0aca8fe3-9985-4750-8074-1a65c43e60db/volumes"
Jan 22 07:23:29 crc kubenswrapper[4720]: I0122 07:23:29.780036 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:23:29 crc kubenswrapper[4720]: I0122 07:23:29.780555 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:23:59 crc kubenswrapper[4720]: I0122 07:23:59.780012 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:23:59 crc kubenswrapper[4720]: I0122 07:23:59.780523 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:23:59 crc kubenswrapper[4720]: I0122 07:23:59.780561 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:23:59 crc kubenswrapper[4720]: I0122 07:23:59.781023 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 07:23:59 crc kubenswrapper[4720]: I0122 07:23:59.781068 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e" gracePeriod=600
Jan 22 07:24:00 crc kubenswrapper[4720]: I0122 07:24:00.115013 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e" exitCode=0
Jan 22 07:24:00 crc kubenswrapper[4720]: I0122 07:24:00.115203 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e"}
Jan 22 07:24:00 crc kubenswrapper[4720]: I0122 07:24:00.115374 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"}
Jan 22 07:24:00 crc kubenswrapper[4720]: I0122 07:24:00.115401 4720 scope.go:117] "RemoveContainer" containerID="0eb7d27ae9c0d299c1e1a3566b73c7f2c1ab85b6c0032703c951b172cd0c528c"
Jan 22 07:26:29 crc kubenswrapper[4720]: I0122 07:26:29.781266 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:26:29 crc kubenswrapper[4720]: I0122 07:26:29.781738 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:26:59 crc kubenswrapper[4720]: I0122 07:26:59.780224 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:26:59 crc kubenswrapper[4720]: I0122 07:26:59.780747 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:27:29 crc kubenswrapper[4720]: I0122 07:27:29.780228 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:27:29 crc kubenswrapper[4720]: I0122 07:27:29.781607 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:27:29 crc kubenswrapper[4720]: I0122 07:27:29.781709 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:27:29 crc kubenswrapper[4720]: I0122 07:27:29.782326 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 07:27:29 crc kubenswrapper[4720]: I0122 07:27:29.782470 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" gracePeriod=600
Jan 22 07:27:29 crc kubenswrapper[4720]: E0122 07:27:29.902123 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:27:30 crc kubenswrapper[4720]: I0122 07:27:30.844987 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" exitCode=0
Jan 22 07:27:30 crc kubenswrapper[4720]: I0122 07:27:30.845030 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"}
Jan 22 07:27:30 crc kubenswrapper[4720]: I0122 07:27:30.845060 4720 scope.go:117] "RemoveContainer" containerID="d962058cd135b0c7dec5d20ef5079cd43e43b862e0d050e80955f27800040c5e"
Jan 22 07:27:30 crc kubenswrapper[4720]: I0122 07:27:30.845865 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"
Jan 22 07:27:30 crc kubenswrapper[4720]: E0122 07:27:30.846512 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:27:41 crc kubenswrapper[4720]: I0122 07:27:41.211215 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"
Jan 22 07:27:41 crc kubenswrapper[4720]: E0122 07:27:41.212117 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:27:55 crc kubenswrapper[4720]: I0122 07:27:55.211484 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"
Jan 22 07:27:55 crc kubenswrapper[4720]: E0122 07:27:55.212243 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:28:08 crc kubenswrapper[4720]: I0122 07:28:08.216864 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"
Jan 22 07:28:08 crc kubenswrapper[4720]: E0122 07:28:08.218986 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\""
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:28:23 crc kubenswrapper[4720]: I0122 07:28:23.210835 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:28:23 crc kubenswrapper[4720]: E0122 07:28:23.211684 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:28:37 crc kubenswrapper[4720]: I0122 07:28:37.211261 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:28:37 crc kubenswrapper[4720]: E0122 07:28:37.212392 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:28:49 crc kubenswrapper[4720]: I0122 07:28:49.211500 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:28:49 crc kubenswrapper[4720]: E0122 07:28:49.212399 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:29:01 crc kubenswrapper[4720]: I0122 07:29:01.211312 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:29:01 crc kubenswrapper[4720]: E0122 07:29:01.211971 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:29:12 crc kubenswrapper[4720]: I0122 07:29:12.210534 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:29:12 crc kubenswrapper[4720]: E0122 07:29:12.211216 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:29:27 crc kubenswrapper[4720]: I0122 07:29:27.210219 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:29:27 crc kubenswrapper[4720]: E0122 07:29:27.211032 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:29:41 crc kubenswrapper[4720]: I0122 07:29:41.210876 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:29:41 crc kubenswrapper[4720]: E0122 07:29:41.211639 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:29:52 crc kubenswrapper[4720]: I0122 07:29:52.210872 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:29:52 crc kubenswrapper[4720]: E0122 07:29:52.211709 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.150613 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz"] Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151705 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="extract-content" Jan 22 
07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151725 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="extract-content" Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151748 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="extract-utilities" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151757 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="extract-utilities" Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151781 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="extract-content" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151790 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="extract-content" Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151808 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="extract-utilities" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151816 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="extract-utilities" Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151831 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="registry-server" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151839 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="registry-server" Jan 22 07:30:00 crc kubenswrapper[4720]: E0122 07:30:00.151857 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="registry-server" Jan 22 
07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.151866 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="registry-server" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.152131 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aca8fe3-9985-4750-8074-1a65c43e60db" containerName="registry-server" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.152154 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="070b28e7-b049-4d40-ab4c-f0e83cdec265" containerName="registry-server" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.153060 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.155969 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.159206 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.161652 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz"] Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.242802 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.242838 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.242931 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7rz7\" (UniqueName: \"kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.344380 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.344430 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.344513 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7rz7\" (UniqueName: \"kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.345641 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.355828 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.360994 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7rz7\" (UniqueName: \"kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7\") pod \"collect-profiles-29484450-m9xhz\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.478760 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:00 crc kubenswrapper[4720]: I0122 07:30:00.926848 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz"] Jan 22 07:30:01 crc kubenswrapper[4720]: I0122 07:30:01.110385 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" event={"ID":"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7","Type":"ContainerStarted","Data":"90cd3d24ebd825eb019a92810bfe0511657253ae4753faeee211e8764eda5566"} Jan 22 07:30:01 crc kubenswrapper[4720]: I0122 07:30:01.110436 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" event={"ID":"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7","Type":"ContainerStarted","Data":"64947ecf414b681f9e6036c8eaf6cdd8d494ab8d4efc056e25c7098d7852cea8"} Jan 22 07:30:01 crc kubenswrapper[4720]: I0122 07:30:01.136753 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" podStartSLOduration=1.136729659 podStartE2EDuration="1.136729659s" podCreationTimestamp="2026-01-22 07:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 07:30:01.124394777 +0000 UTC m=+3293.266301502" watchObservedRunningTime="2026-01-22 07:30:01.136729659 +0000 UTC m=+3293.278636404" Jan 22 07:30:02 crc kubenswrapper[4720]: I0122 07:30:02.119112 4720 generic.go:334] "Generic (PLEG): container finished" podID="51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" containerID="90cd3d24ebd825eb019a92810bfe0511657253ae4753faeee211e8764eda5566" exitCode=0 Jan 22 07:30:02 crc kubenswrapper[4720]: I0122 07:30:02.119327 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" event={"ID":"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7","Type":"ContainerDied","Data":"90cd3d24ebd825eb019a92810bfe0511657253ae4753faeee211e8764eda5566"} Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.421605 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.510473 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7rz7\" (UniqueName: \"kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7\") pod \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.510516 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume\") pod \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.510544 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume\") pod \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\" (UID: \"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7\") " Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.511474 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume" (OuterVolumeSpecName: "config-volume") pod "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" (UID: "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.515876 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" (UID: "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.517611 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7" (OuterVolumeSpecName: "kube-api-access-d7rz7") pod "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" (UID: "51cd7ccf-faf7-4bd0-af16-1ea239f4cca7"). InnerVolumeSpecName "kube-api-access-d7rz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.612175 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.612212 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7rz7\" (UniqueName: \"kubernetes.io/projected/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-kube-api-access-d7rz7\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:03 crc kubenswrapper[4720]: I0122 07:30:03.612222 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51cd7ccf-faf7-4bd0-af16-1ea239f4cca7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:04 crc kubenswrapper[4720]: I0122 07:30:04.135600 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" 
event={"ID":"51cd7ccf-faf7-4bd0-af16-1ea239f4cca7","Type":"ContainerDied","Data":"64947ecf414b681f9e6036c8eaf6cdd8d494ab8d4efc056e25c7098d7852cea8"} Jan 22 07:30:04 crc kubenswrapper[4720]: I0122 07:30:04.135945 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64947ecf414b681f9e6036c8eaf6cdd8d494ab8d4efc056e25c7098d7852cea8" Jan 22 07:30:04 crc kubenswrapper[4720]: I0122 07:30:04.136028 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484450-m9xhz" Jan 22 07:30:04 crc kubenswrapper[4720]: I0122 07:30:04.503855 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"] Jan 22 07:30:04 crc kubenswrapper[4720]: I0122 07:30:04.509861 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484405-fcsb6"] Jan 22 07:30:06 crc kubenswrapper[4720]: I0122 07:30:06.221613 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216f8ab1-3326-4006-b0b5-ac9018b17dbe" path="/var/lib/kubelet/pods/216f8ab1-3326-4006-b0b5-ac9018b17dbe/volumes" Jan 22 07:30:07 crc kubenswrapper[4720]: I0122 07:30:07.210430 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:30:07 crc kubenswrapper[4720]: E0122 07:30:07.210713 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.799522 4720 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:08 crc kubenswrapper[4720]: E0122 07:30:08.800247 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" containerName="collect-profiles" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.800264 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" containerName="collect-profiles" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.800446 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="51cd7ccf-faf7-4bd0-af16-1ea239f4cca7" containerName="collect-profiles" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.801768 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.815616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcp99\" (UniqueName: \"kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.815700 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.815740 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities\") pod \"redhat-operators-cdcwm\" (UID: 
\"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.834176 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.917114 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcp99\" (UniqueName: \"kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.917158 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.917217 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.917765 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.917780 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:08 crc kubenswrapper[4720]: I0122 07:30:08.937846 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcp99\" (UniqueName: \"kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99\") pod \"redhat-operators-cdcwm\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:09 crc kubenswrapper[4720]: I0122 07:30:09.126308 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:09 crc kubenswrapper[4720]: I0122 07:30:09.433542 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:09 crc kubenswrapper[4720]: W0122 07:30:09.440265 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08cc7a6f_2a74_46a8_aded_0ebdf6ca2aac.slice/crio-2a283321384250653970489f531df3192ff33e8299d91f81741bae3e924e2367 WatchSource:0}: Error finding container 2a283321384250653970489f531df3192ff33e8299d91f81741bae3e924e2367: Status 404 returned error can't find the container with id 2a283321384250653970489f531df3192ff33e8299d91f81741bae3e924e2367 Jan 22 07:30:10 crc kubenswrapper[4720]: I0122 07:30:10.177049 4720 generic.go:334] "Generic (PLEG): container finished" podID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerID="a01e8a19f2cb9f2dd91ec13e5abf19fc9e6b76c85bf8743db7850aa501d0157d" exitCode=0 Jan 22 07:30:10 crc kubenswrapper[4720]: I0122 07:30:10.177151 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" 
event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerDied","Data":"a01e8a19f2cb9f2dd91ec13e5abf19fc9e6b76c85bf8743db7850aa501d0157d"} Jan 22 07:30:10 crc kubenswrapper[4720]: I0122 07:30:10.177358 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerStarted","Data":"2a283321384250653970489f531df3192ff33e8299d91f81741bae3e924e2367"} Jan 22 07:30:10 crc kubenswrapper[4720]: I0122 07:30:10.179738 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.185434 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerStarted","Data":"a97d9a45e77a040997ad77f882f014018c4fe68eef036421103cf53901bb9ced"} Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.392829 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.398881 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.408623 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.456990 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.457181 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrsbx\" (UniqueName: \"kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.457340 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.558580 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrsbx\" (UniqueName: \"kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.558670 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.558714 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.559173 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.559250 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.587851 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrsbx\" (UniqueName: \"kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx\") pod \"certified-operators-ssthc\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:11 crc kubenswrapper[4720]: I0122 07:30:11.715001 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:12 crc kubenswrapper[4720]: I0122 07:30:12.194521 4720 generic.go:334] "Generic (PLEG): container finished" podID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerID="a97d9a45e77a040997ad77f882f014018c4fe68eef036421103cf53901bb9ced" exitCode=0 Jan 22 07:30:12 crc kubenswrapper[4720]: I0122 07:30:12.194712 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerDied","Data":"a97d9a45e77a040997ad77f882f014018c4fe68eef036421103cf53901bb9ced"} Jan 22 07:30:12 crc kubenswrapper[4720]: W0122 07:30:12.286939 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod068c8bd4_616c_4e5f_a4de_faeb9bc1d52f.slice/crio-d5ad7fe0cbd8059280ccab05948ccbc9d610c5f330e693d56f3f40bf41785e80 WatchSource:0}: Error finding container d5ad7fe0cbd8059280ccab05948ccbc9d610c5f330e693d56f3f40bf41785e80: Status 404 returned error can't find the container with id d5ad7fe0cbd8059280ccab05948ccbc9d610c5f330e693d56f3f40bf41785e80 Jan 22 07:30:12 crc kubenswrapper[4720]: I0122 07:30:12.287635 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:13 crc kubenswrapper[4720]: I0122 07:30:13.205056 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerStarted","Data":"8a4d1511c5658d9a221f3ba756a52b37c328e71a3101bc048f70d6ded19120df"} Jan 22 07:30:13 crc kubenswrapper[4720]: I0122 07:30:13.206210 4720 generic.go:334] "Generic (PLEG): container finished" podID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerID="735c3c379812a4317cf05ea7572ce35aff1c868d8cca96e6637e3e763f767f61" exitCode=0 Jan 22 07:30:13 crc kubenswrapper[4720]: I0122 
07:30:13.206237 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerDied","Data":"735c3c379812a4317cf05ea7572ce35aff1c868d8cca96e6637e3e763f767f61"} Jan 22 07:30:13 crc kubenswrapper[4720]: I0122 07:30:13.206253 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerStarted","Data":"d5ad7fe0cbd8059280ccab05948ccbc9d610c5f330e693d56f3f40bf41785e80"} Jan 22 07:30:13 crc kubenswrapper[4720]: I0122 07:30:13.223811 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdcwm" podStartSLOduration=2.82679294 podStartE2EDuration="5.223794918s" podCreationTimestamp="2026-01-22 07:30:08 +0000 UTC" firstStartedPulling="2026-01-22 07:30:10.179486898 +0000 UTC m=+3302.321393603" lastFinishedPulling="2026-01-22 07:30:12.576488876 +0000 UTC m=+3304.718395581" observedRunningTime="2026-01-22 07:30:13.221990447 +0000 UTC m=+3305.363897152" watchObservedRunningTime="2026-01-22 07:30:13.223794918 +0000 UTC m=+3305.365701623" Jan 22 07:30:14 crc kubenswrapper[4720]: I0122 07:30:14.221829 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerStarted","Data":"43eb537b37e19ed100ec961b1bc8f0d9428fd87838a31d7dc1ab59c072aff40c"} Jan 22 07:30:17 crc kubenswrapper[4720]: I0122 07:30:17.239246 4720 generic.go:334] "Generic (PLEG): container finished" podID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerID="43eb537b37e19ed100ec961b1bc8f0d9428fd87838a31d7dc1ab59c072aff40c" exitCode=0 Jan 22 07:30:17 crc kubenswrapper[4720]: I0122 07:30:17.239290 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" 
event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerDied","Data":"43eb537b37e19ed100ec961b1bc8f0d9428fd87838a31d7dc1ab59c072aff40c"} Jan 22 07:30:18 crc kubenswrapper[4720]: I0122 07:30:18.215467 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:30:18 crc kubenswrapper[4720]: E0122 07:30:18.216244 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.126793 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.127194 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.169713 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.260202 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerStarted","Data":"ed527b511380b4c55ca20d65e2ed620d32ca3663cf9a0efede928549ca3c23de"} Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.290579 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ssthc" podStartSLOduration=2.696795311 podStartE2EDuration="8.290559986s" podCreationTimestamp="2026-01-22 
07:30:11 +0000 UTC" firstStartedPulling="2026-01-22 07:30:13.207685719 +0000 UTC m=+3305.349592424" lastFinishedPulling="2026-01-22 07:30:18.801450394 +0000 UTC m=+3310.943357099" observedRunningTime="2026-01-22 07:30:19.287315494 +0000 UTC m=+3311.429222209" watchObservedRunningTime="2026-01-22 07:30:19.290559986 +0000 UTC m=+3311.432466691" Jan 22 07:30:19 crc kubenswrapper[4720]: I0122 07:30:19.314341 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:20 crc kubenswrapper[4720]: I0122 07:30:20.110550 4720 scope.go:117] "RemoveContainer" containerID="4f659f8d7e2a0d044e94c569c7209aefbda13544226d0f425c6c04462dd0afcb" Jan 22 07:30:21 crc kubenswrapper[4720]: I0122 07:30:21.590729 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:21 crc kubenswrapper[4720]: I0122 07:30:21.591278 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdcwm" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="registry-server" containerID="cri-o://8a4d1511c5658d9a221f3ba756a52b37c328e71a3101bc048f70d6ded19120df" gracePeriod=2 Jan 22 07:30:21 crc kubenswrapper[4720]: I0122 07:30:21.715792 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:21 crc kubenswrapper[4720]: I0122 07:30:21.715898 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:21 crc kubenswrapper[4720]: I0122 07:30:21.761246 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.281323 4720 generic.go:334] "Generic (PLEG): container finished" podID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" 
containerID="8a4d1511c5658d9a221f3ba756a52b37c328e71a3101bc048f70d6ded19120df" exitCode=0 Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.281386 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerDied","Data":"8a4d1511c5658d9a221f3ba756a52b37c328e71a3101bc048f70d6ded19120df"} Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.504005 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.524171 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities\") pod \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.524222 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content\") pod \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.524324 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcp99\" (UniqueName: \"kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99\") pod \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\" (UID: \"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac\") " Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.535306 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities" (OuterVolumeSpecName: "utilities") pod "08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" (UID: 
"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.536339 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99" (OuterVolumeSpecName: "kube-api-access-tcp99") pod "08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" (UID: "08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac"). InnerVolumeSpecName "kube-api-access-tcp99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.626504 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcp99\" (UniqueName: \"kubernetes.io/projected/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-kube-api-access-tcp99\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.626856 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.654815 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" (UID: "08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:30:22 crc kubenswrapper[4720]: I0122 07:30:22.728635 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.295211 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdcwm" event={"ID":"08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac","Type":"ContainerDied","Data":"2a283321384250653970489f531df3192ff33e8299d91f81741bae3e924e2367"} Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.295273 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdcwm" Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.295295 4720 scope.go:117] "RemoveContainer" containerID="8a4d1511c5658d9a221f3ba756a52b37c328e71a3101bc048f70d6ded19120df" Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.323650 4720 scope.go:117] "RemoveContainer" containerID="a97d9a45e77a040997ad77f882f014018c4fe68eef036421103cf53901bb9ced" Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.331961 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.339016 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdcwm"] Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.355499 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:23 crc kubenswrapper[4720]: I0122 07:30:23.360195 4720 scope.go:117] "RemoveContainer" containerID="a01e8a19f2cb9f2dd91ec13e5abf19fc9e6b76c85bf8743db7850aa501d0157d" Jan 22 07:30:24 crc kubenswrapper[4720]: I0122 07:30:24.222083 4720 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" path="/var/lib/kubelet/pods/08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac/volumes" Jan 22 07:30:25 crc kubenswrapper[4720]: I0122 07:30:25.783130 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:25 crc kubenswrapper[4720]: I0122 07:30:25.783341 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ssthc" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="registry-server" containerID="cri-o://ed527b511380b4c55ca20d65e2ed620d32ca3663cf9a0efede928549ca3c23de" gracePeriod=2 Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.322621 4720 generic.go:334] "Generic (PLEG): container finished" podID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerID="ed527b511380b4c55ca20d65e2ed620d32ca3663cf9a0efede928549ca3c23de" exitCode=0 Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.322920 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerDied","Data":"ed527b511380b4c55ca20d65e2ed620d32ca3663cf9a0efede928549ca3c23de"} Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.795996 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.895533 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content\") pod \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.895610 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrsbx\" (UniqueName: \"kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx\") pod \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.895738 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities\") pod \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\" (UID: \"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f\") " Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.896967 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities" (OuterVolumeSpecName: "utilities") pod "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" (UID: "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.914336 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx" (OuterVolumeSpecName: "kube-api-access-xrsbx") pod "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" (UID: "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f"). InnerVolumeSpecName "kube-api-access-xrsbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.942563 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" (UID: "068c8bd4-616c-4e5f-a4de-faeb9bc1d52f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.997538 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.997581 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrsbx\" (UniqueName: \"kubernetes.io/projected/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-kube-api-access-xrsbx\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:26 crc kubenswrapper[4720]: I0122 07:30:26.997601 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.344127 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssthc" event={"ID":"068c8bd4-616c-4e5f-a4de-faeb9bc1d52f","Type":"ContainerDied","Data":"d5ad7fe0cbd8059280ccab05948ccbc9d610c5f330e693d56f3f40bf41785e80"} Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.344181 4720 scope.go:117] "RemoveContainer" containerID="ed527b511380b4c55ca20d65e2ed620d32ca3663cf9a0efede928549ca3c23de" Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.344223 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssthc" Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.378032 4720 scope.go:117] "RemoveContainer" containerID="43eb537b37e19ed100ec961b1bc8f0d9428fd87838a31d7dc1ab59c072aff40c" Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.408802 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.412927 4720 scope.go:117] "RemoveContainer" containerID="735c3c379812a4317cf05ea7572ce35aff1c868d8cca96e6637e3e763f767f61" Jan 22 07:30:27 crc kubenswrapper[4720]: I0122 07:30:27.418075 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ssthc"] Jan 22 07:30:28 crc kubenswrapper[4720]: I0122 07:30:28.219138 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" path="/var/lib/kubelet/pods/068c8bd4-616c-4e5f-a4de-faeb9bc1d52f/volumes" Jan 22 07:30:33 crc kubenswrapper[4720]: I0122 07:30:33.211128 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:30:33 crc kubenswrapper[4720]: E0122 07:30:33.211844 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:30:48 crc kubenswrapper[4720]: I0122 07:30:48.214710 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:30:48 crc kubenswrapper[4720]: E0122 07:30:48.215625 4720 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:31:01 crc kubenswrapper[4720]: I0122 07:31:01.210422 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:31:01 crc kubenswrapper[4720]: E0122 07:31:01.211252 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:31:12 crc kubenswrapper[4720]: I0122 07:31:12.211207 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:31:12 crc kubenswrapper[4720]: E0122 07:31:12.212740 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:31:26 crc kubenswrapper[4720]: I0122 07:31:26.212683 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:31:26 crc kubenswrapper[4720]: E0122 07:31:26.213349 4720 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:31:41 crc kubenswrapper[4720]: I0122 07:31:41.211436 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:31:41 crc kubenswrapper[4720]: E0122 07:31:41.212306 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:31:55 crc kubenswrapper[4720]: I0122 07:31:55.210621 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:31:55 crc kubenswrapper[4720]: E0122 07:31:55.211537 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:32:08 crc kubenswrapper[4720]: I0122 07:32:08.225116 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:32:08 crc kubenswrapper[4720]: E0122 07:32:08.225882 4720 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:32:22 crc kubenswrapper[4720]: I0122 07:32:22.211186 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:32:22 crc kubenswrapper[4720]: E0122 07:32:22.212096 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:32:37 crc kubenswrapper[4720]: I0122 07:32:37.211215 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a" Jan 22 07:32:37 crc kubenswrapper[4720]: I0122 07:32:37.469860 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404"} Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.997984 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"] Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998802 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="extract-utilities" Jan 22 
07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998816 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="extract-utilities"
Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998828 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="extract-content"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998834 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="extract-content"
Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998847 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="extract-content"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998855 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="extract-content"
Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998863 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="extract-utilities"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998869 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="extract-utilities"
Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998881 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="registry-server"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998887 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="registry-server"
Jan 22 07:32:45 crc kubenswrapper[4720]: E0122 07:32:45.998895 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="registry-server"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.998901 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="registry-server"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.999070 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc7a6f-2a74-46a8-aded-0ebdf6ca2aac" containerName="registry-server"
Jan 22 07:32:45 crc kubenswrapper[4720]: I0122 07:32:45.999082 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="068c8bd4-616c-4e5f-a4de-faeb9bc1d52f" containerName="registry-server"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.000322 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.032578 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"]
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.123061 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.123135 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4vl9\" (UniqueName: \"kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.123182 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.225071 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.225327 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4vl9\" (UniqueName: \"kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.225375 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.225637 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.225773 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.257266 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4vl9\" (UniqueName: \"kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9\") pod \"redhat-marketplace-ttk9n\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") " pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.319044 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:46 crc kubenswrapper[4720]: I0122 07:32:46.566545 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"]
Jan 22 07:32:47 crc kubenswrapper[4720]: I0122 07:32:47.549152 4720 generic.go:334] "Generic (PLEG): container finished" podID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerID="8783346d8ac9b7d244b2907866db30ee1c5501932a517da19d83b400cb2654ca" exitCode=0
Jan 22 07:32:47 crc kubenswrapper[4720]: I0122 07:32:47.549517 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerDied","Data":"8783346d8ac9b7d244b2907866db30ee1c5501932a517da19d83b400cb2654ca"}
Jan 22 07:32:47 crc kubenswrapper[4720]: I0122 07:32:47.549553 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerStarted","Data":"8436dc00d497ea9a7d7e453666a2b2b26e9a9d4bfb3d0af47e95e70b6f23ace1"}
Jan 22 07:32:48 crc kubenswrapper[4720]: I0122 07:32:48.557553 4720 generic.go:334] "Generic (PLEG): container finished" podID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerID="5f68045f27b326dd90edb5dcbb9a7862aa4927105639d60fd91d15b1e39ec810" exitCode=0
Jan 22 07:32:48 crc kubenswrapper[4720]: I0122 07:32:48.557971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerDied","Data":"5f68045f27b326dd90edb5dcbb9a7862aa4927105639d60fd91d15b1e39ec810"}
Jan 22 07:32:49 crc kubenswrapper[4720]: I0122 07:32:49.566556 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerStarted","Data":"d908db6f6d3e5661072120ce906f363bf648a21d89799cec076f033889fca907"}
Jan 22 07:32:49 crc kubenswrapper[4720]: I0122 07:32:49.589539 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ttk9n" podStartSLOduration=3.090993428 podStartE2EDuration="4.589517695s" podCreationTimestamp="2026-01-22 07:32:45 +0000 UTC" firstStartedPulling="2026-01-22 07:32:47.551483699 +0000 UTC m=+3459.693390424" lastFinishedPulling="2026-01-22 07:32:49.050007966 +0000 UTC m=+3461.191914691" observedRunningTime="2026-01-22 07:32:49.586163579 +0000 UTC m=+3461.728070314" watchObservedRunningTime="2026-01-22 07:32:49.589517695 +0000 UTC m=+3461.731424400"
Jan 22 07:32:56 crc kubenswrapper[4720]: I0122 07:32:56.319417 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:56 crc kubenswrapper[4720]: I0122 07:32:56.319818 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:56 crc kubenswrapper[4720]: I0122 07:32:56.374004 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:56 crc kubenswrapper[4720]: I0122 07:32:56.690194 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:32:59 crc kubenswrapper[4720]: I0122 07:32:59.995253 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"]
Jan 22 07:32:59 crc kubenswrapper[4720]: I0122 07:32:59.995874 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ttk9n" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="registry-server" containerID="cri-o://d908db6f6d3e5661072120ce906f363bf648a21d89799cec076f033889fca907" gracePeriod=2
Jan 22 07:33:00 crc kubenswrapper[4720]: I0122 07:33:00.670655 4720 generic.go:334] "Generic (PLEG): container finished" podID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerID="d908db6f6d3e5661072120ce906f363bf648a21d89799cec076f033889fca907" exitCode=0
Jan 22 07:33:00 crc kubenswrapper[4720]: I0122 07:33:00.670835 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerDied","Data":"d908db6f6d3e5661072120ce906f363bf648a21d89799cec076f033889fca907"}
Jan 22 07:33:00 crc kubenswrapper[4720]: I0122 07:33:00.917545 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.079391 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content\") pod \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") "
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.079445 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4vl9\" (UniqueName: \"kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9\") pod \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") "
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.079562 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities\") pod \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\" (UID: \"0fddbb17-3235-4ed6-ade7-7aeff0d6430a\") "
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.080368 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities" (OuterVolumeSpecName: "utilities") pod "0fddbb17-3235-4ed6-ade7-7aeff0d6430a" (UID: "0fddbb17-3235-4ed6-ade7-7aeff0d6430a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.086693 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9" (OuterVolumeSpecName: "kube-api-access-q4vl9") pod "0fddbb17-3235-4ed6-ade7-7aeff0d6430a" (UID: "0fddbb17-3235-4ed6-ade7-7aeff0d6430a"). InnerVolumeSpecName "kube-api-access-q4vl9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.102443 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fddbb17-3235-4ed6-ade7-7aeff0d6430a" (UID: "0fddbb17-3235-4ed6-ade7-7aeff0d6430a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.180928 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.181225 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4vl9\" (UniqueName: \"kubernetes.io/projected/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-kube-api-access-q4vl9\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.181237 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fddbb17-3235-4ed6-ade7-7aeff0d6430a-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.682777 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttk9n" event={"ID":"0fddbb17-3235-4ed6-ade7-7aeff0d6430a","Type":"ContainerDied","Data":"8436dc00d497ea9a7d7e453666a2b2b26e9a9d4bfb3d0af47e95e70b6f23ace1"}
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.682873 4720 scope.go:117] "RemoveContainer" containerID="d908db6f6d3e5661072120ce906f363bf648a21d89799cec076f033889fca907"
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.683228 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttk9n"
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.733116 4720 scope.go:117] "RemoveContainer" containerID="5f68045f27b326dd90edb5dcbb9a7862aa4927105639d60fd91d15b1e39ec810"
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.779517 4720 scope.go:117] "RemoveContainer" containerID="8783346d8ac9b7d244b2907866db30ee1c5501932a517da19d83b400cb2654ca"
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.798945 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"]
Jan 22 07:33:01 crc kubenswrapper[4720]: I0122 07:33:01.824973 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttk9n"]
Jan 22 07:33:02 crc kubenswrapper[4720]: I0122 07:33:02.223303 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" path="/var/lib/kubelet/pods/0fddbb17-3235-4ed6-ade7-7aeff0d6430a/volumes"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.393823 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:15 crc kubenswrapper[4720]: E0122 07:33:15.394726 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="extract-content"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.394740 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="extract-content"
Jan 22 07:33:15 crc kubenswrapper[4720]: E0122 07:33:15.394763 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="registry-server"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.394769 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="registry-server"
Jan 22 07:33:15 crc kubenswrapper[4720]: E0122 07:33:15.394783 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="extract-utilities"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.394790 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="extract-utilities"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.394949 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fddbb17-3235-4ed6-ade7-7aeff0d6430a" containerName="registry-server"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.396054 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.406404 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.509505 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.509616 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqcp\" (UniqueName: \"kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.509663 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.610700 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmqcp\" (UniqueName: \"kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.610773 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.610825 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.611312 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.611400 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.654708 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmqcp\" (UniqueName: \"kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp\") pod \"community-operators-c8xrd\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") " pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:15 crc kubenswrapper[4720]: I0122 07:33:15.725391 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:16 crc kubenswrapper[4720]: I0122 07:33:16.230216 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:16 crc kubenswrapper[4720]: I0122 07:33:16.810305 4720 generic.go:334] "Generic (PLEG): container finished" podID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerID="cb1d42d4f5c9fdb4fe3123af7ac8f2cfd8500308f2500648dc2a6adf0c095377" exitCode=0
Jan 22 07:33:16 crc kubenswrapper[4720]: I0122 07:33:16.810349 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerDied","Data":"cb1d42d4f5c9fdb4fe3123af7ac8f2cfd8500308f2500648dc2a6adf0c095377"}
Jan 22 07:33:16 crc kubenswrapper[4720]: I0122 07:33:16.810372 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerStarted","Data":"118fe67238c4be2f4a0fab40b4cc69108c32df3d20b2b41d7f7c9364f40f70a6"}
Jan 22 07:33:17 crc kubenswrapper[4720]: I0122 07:33:17.822124 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerStarted","Data":"c35181317fc2f618bdaf1e5eab440d41a4fdf1b9eada41b774495c0a06c2114b"}
Jan 22 07:33:18 crc kubenswrapper[4720]: I0122 07:33:18.833036 4720 generic.go:334] "Generic (PLEG): container finished" podID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerID="c35181317fc2f618bdaf1e5eab440d41a4fdf1b9eada41b774495c0a06c2114b" exitCode=0
Jan 22 07:33:18 crc kubenswrapper[4720]: I0122 07:33:18.833086 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerDied","Data":"c35181317fc2f618bdaf1e5eab440d41a4fdf1b9eada41b774495c0a06c2114b"}
Jan 22 07:33:19 crc kubenswrapper[4720]: I0122 07:33:19.840783 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerStarted","Data":"0027c8d5702dbd019e9ee924f4ce89319bf4b5de5494d8e5482e1d25d4cf1ffc"}
Jan 22 07:33:19 crc kubenswrapper[4720]: I0122 07:33:19.867260 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8xrd" podStartSLOduration=2.346371733 podStartE2EDuration="4.867245122s" podCreationTimestamp="2026-01-22 07:33:15 +0000 UTC" firstStartedPulling="2026-01-22 07:33:16.811852406 +0000 UTC m=+3488.953759111" lastFinishedPulling="2026-01-22 07:33:19.332725795 +0000 UTC m=+3491.474632500" observedRunningTime="2026-01-22 07:33:19.860989253 +0000 UTC m=+3492.002895978" watchObservedRunningTime="2026-01-22 07:33:19.867245122 +0000 UTC m=+3492.009151827"
Jan 22 07:33:25 crc kubenswrapper[4720]: I0122 07:33:25.727071 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:25 crc kubenswrapper[4720]: I0122 07:33:25.727865 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:25 crc kubenswrapper[4720]: I0122 07:33:25.794585 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:25 crc kubenswrapper[4720]: I0122 07:33:25.952425 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:33 crc kubenswrapper[4720]: I0122 07:33:33.387076 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:33 crc kubenswrapper[4720]: I0122 07:33:33.387817 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8xrd" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="registry-server" containerID="cri-o://0027c8d5702dbd019e9ee924f4ce89319bf4b5de5494d8e5482e1d25d4cf1ffc" gracePeriod=2
Jan 22 07:33:33 crc kubenswrapper[4720]: I0122 07:33:33.957839 4720 generic.go:334] "Generic (PLEG): container finished" podID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerID="0027c8d5702dbd019e9ee924f4ce89319bf4b5de5494d8e5482e1d25d4cf1ffc" exitCode=0
Jan 22 07:33:33 crc kubenswrapper[4720]: I0122 07:33:33.957893 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerDied","Data":"0027c8d5702dbd019e9ee924f4ce89319bf4b5de5494d8e5482e1d25d4cf1ffc"}
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.356141 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.474519 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities\") pod \"c19fec4b-6ede-4d73-bab1-1011b0888301\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") "
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.474581 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content\") pod \"c19fec4b-6ede-4d73-bab1-1011b0888301\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") "
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.474717 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmqcp\" (UniqueName: \"kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp\") pod \"c19fec4b-6ede-4d73-bab1-1011b0888301\" (UID: \"c19fec4b-6ede-4d73-bab1-1011b0888301\") "
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.476708 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities" (OuterVolumeSpecName: "utilities") pod "c19fec4b-6ede-4d73-bab1-1011b0888301" (UID: "c19fec4b-6ede-4d73-bab1-1011b0888301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.492117 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp" (OuterVolumeSpecName: "kube-api-access-xmqcp") pod "c19fec4b-6ede-4d73-bab1-1011b0888301" (UID: "c19fec4b-6ede-4d73-bab1-1011b0888301"). InnerVolumeSpecName "kube-api-access-xmqcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.533677 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c19fec4b-6ede-4d73-bab1-1011b0888301" (UID: "c19fec4b-6ede-4d73-bab1-1011b0888301"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.577031 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.577314 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c19fec4b-6ede-4d73-bab1-1011b0888301-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.577404 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmqcp\" (UniqueName: \"kubernetes.io/projected/c19fec4b-6ede-4d73-bab1-1011b0888301-kube-api-access-xmqcp\") on node \"crc\" DevicePath \"\""
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.965085 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8xrd" event={"ID":"c19fec4b-6ede-4d73-bab1-1011b0888301","Type":"ContainerDied","Data":"118fe67238c4be2f4a0fab40b4cc69108c32df3d20b2b41d7f7c9364f40f70a6"}
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.965146 4720 scope.go:117] "RemoveContainer" containerID="0027c8d5702dbd019e9ee924f4ce89319bf4b5de5494d8e5482e1d25d4cf1ffc"
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.965282 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8xrd"
Jan 22 07:33:34 crc kubenswrapper[4720]: I0122 07:33:34.999255 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:35 crc kubenswrapper[4720]: I0122 07:33:35.001989 4720 scope.go:117] "RemoveContainer" containerID="c35181317fc2f618bdaf1e5eab440d41a4fdf1b9eada41b774495c0a06c2114b"
Jan 22 07:33:35 crc kubenswrapper[4720]: I0122 07:33:35.007871 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8xrd"]
Jan 22 07:33:35 crc kubenswrapper[4720]: I0122 07:33:35.031594 4720 scope.go:117] "RemoveContainer" containerID="cb1d42d4f5c9fdb4fe3123af7ac8f2cfd8500308f2500648dc2a6adf0c095377"
Jan 22 07:33:36 crc kubenswrapper[4720]: I0122 07:33:36.219982 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" path="/var/lib/kubelet/pods/c19fec4b-6ede-4d73-bab1-1011b0888301/volumes"
Jan 22 07:34:59 crc kubenswrapper[4720]: I0122 07:34:59.779943 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:34:59 crc kubenswrapper[4720]: I0122 07:34:59.780583 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:35:29 crc kubenswrapper[4720]: I0122 07:35:29.780462 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:35:29 crc kubenswrapper[4720]: I0122 07:35:29.781001 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:35:59 crc kubenswrapper[4720]: I0122 07:35:59.780192 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:35:59 crc kubenswrapper[4720]: I0122 07:35:59.780841 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:35:59 crc kubenswrapper[4720]: I0122 07:35:59.780901 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:35:59 crc kubenswrapper[4720]: I0122 07:35:59.782143 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 07:35:59 crc kubenswrapper[4720]: I0122 07:35:59.782198 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404" gracePeriod=600
Jan 22 07:36:00 crc kubenswrapper[4720]: I0122 07:36:00.328976 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404" exitCode=0
Jan 22 07:36:00 crc kubenswrapper[4720]: I0122 07:36:00.329042 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404"}
Jan 22 07:36:00 crc kubenswrapper[4720]: I0122 07:36:00.329347 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"}
Jan 22 07:36:00 crc kubenswrapper[4720]: I0122 07:36:00.329389 4720 scope.go:117] "RemoveContainer" containerID="d51385d75b1238749f849dc07117df3a2c33a67da77e643d587f470ad3eb8f8a"
Jan 22 07:38:29 crc kubenswrapper[4720]: I0122 07:38:29.780515 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:38:29 crc kubenswrapper[4720]: I0122 07:38:29.781294 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:38:59 crc kubenswrapper[4720]: I0122 07:38:59.780440 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:38:59 crc kubenswrapper[4720]: I0122 07:38:59.781256 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.780557 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.781151 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.781194 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.781785 4720 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.781837 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" gracePeriod=600 Jan 22 07:39:29 crc kubenswrapper[4720]: E0122 07:39:29.911518 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.956817 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" exitCode=0 Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.956864 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"} Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.956900 4720 scope.go:117] "RemoveContainer" 
containerID="0710e049a63a4a94943c4705f1694b188f6b833b61e7983b3d30140e58a14404" Jan 22 07:39:29 crc kubenswrapper[4720]: I0122 07:39:29.957567 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:39:29 crc kubenswrapper[4720]: E0122 07:39:29.957816 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:39:44 crc kubenswrapper[4720]: I0122 07:39:44.211764 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:39:44 crc kubenswrapper[4720]: E0122 07:39:44.212590 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:39:56 crc kubenswrapper[4720]: I0122 07:39:56.211168 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:39:56 crc kubenswrapper[4720]: E0122 07:39:56.211873 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:40:08 crc kubenswrapper[4720]: I0122 07:40:08.210817 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:40:08 crc kubenswrapper[4720]: E0122 07:40:08.213081 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:40:22 crc kubenswrapper[4720]: I0122 07:40:22.210876 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:40:22 crc kubenswrapper[4720]: E0122 07:40:22.211722 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:40:36 crc kubenswrapper[4720]: I0122 07:40:36.210494 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:40:36 crc kubenswrapper[4720]: E0122 07:40:36.211177 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:40:50 crc kubenswrapper[4720]: I0122 07:40:50.210707 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:40:50 crc kubenswrapper[4720]: E0122 07:40:50.211783 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:41:04 crc kubenswrapper[4720]: I0122 07:41:04.211503 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:41:04 crc kubenswrapper[4720]: E0122 07:41:04.212237 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:41:18 crc kubenswrapper[4720]: I0122 07:41:18.217183 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:41:18 crc kubenswrapper[4720]: E0122 07:41:18.218522 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.995117 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"] Jan 22 07:41:26 crc kubenswrapper[4720]: E0122 07:41:26.995992 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="registry-server" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.996006 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="registry-server" Jan 22 07:41:26 crc kubenswrapper[4720]: E0122 07:41:26.996017 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="extract-content" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.996023 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="extract-content" Jan 22 07:41:26 crc kubenswrapper[4720]: E0122 07:41:26.996039 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="extract-utilities" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.996046 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="extract-utilities" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.996205 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19fec4b-6ede-4d73-bab1-1011b0888301" containerName="registry-server" Jan 22 07:41:26 crc kubenswrapper[4720]: I0122 07:41:26.997268 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.010373 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"] Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.090823 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.090969 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnb27\" (UniqueName: \"kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.091059 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.192524 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.192663 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.192785 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnb27\" (UniqueName: \"kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.193061 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.193321 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.213318 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnb27\" (UniqueName: \"kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27\") pod \"redhat-operators-7vhtl\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") " pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.320249 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.629470 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"] Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.631734 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.644238 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"] Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.707157 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqhm8\" (UniqueName: \"kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.707269 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.707345 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.808611 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-fqhm8\" (UniqueName: \"kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.809675 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.809758 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.810452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.810452 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.853841 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqhm8\" (UniqueName: 
\"kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8\") pod \"certified-operators-bhm2j\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") " pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.910671 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"] Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.957165 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerStarted","Data":"4a13af1a8e9b683b7723ece43d09bfc4e02dd335d1a5c73234219a2c3e8d787f"} Jan 22 07:41:27 crc kubenswrapper[4720]: I0122 07:41:27.963957 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.486930 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"] Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.966243 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerID="5dcc1fcecd2f4dd739db94d629d33405ea3ef04d8b999c177b4c4a2400678014" exitCode=0 Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.966378 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerDied","Data":"5dcc1fcecd2f4dd739db94d629d33405ea3ef04d8b999c177b4c4a2400678014"} Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.966644 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerStarted","Data":"4458fcb2d47dfd0dc6a664f8d749f4c7566f41da096a73bcf82c1c8521028810"} Jan 22 07:41:28 
crc kubenswrapper[4720]: I0122 07:41:28.968497 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.969532 4720 generic.go:334] "Generic (PLEG): container finished" podID="d7f4c945-2519-4fac-a039-39388edfc00c" containerID="8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed" exitCode=0 Jan 22 07:41:28 crc kubenswrapper[4720]: I0122 07:41:28.969598 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerDied","Data":"8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed"} Jan 22 07:41:29 crc kubenswrapper[4720]: I0122 07:41:29.982665 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerStarted","Data":"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"} Jan 22 07:41:30 crc kubenswrapper[4720]: I0122 07:41:30.991387 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerID="cef60d1f3db2d3c0d24ddec2ff0748091c71dec0733f3f23b60c454103420c82" exitCode=0 Jan 22 07:41:30 crc kubenswrapper[4720]: I0122 07:41:30.991476 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerDied","Data":"cef60d1f3db2d3c0d24ddec2ff0748091c71dec0733f3f23b60c454103420c82"} Jan 22 07:41:32 crc kubenswrapper[4720]: I0122 07:41:32.211534 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:41:32 crc kubenswrapper[4720]: E0122 07:41:32.212089 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:41:33 crc kubenswrapper[4720]: I0122 07:41:33.009235 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerStarted","Data":"62816aaefd63b7599a003124abdf2575a8db024ab6f4022255f4d9e961e37be8"} Jan 22 07:41:33 crc kubenswrapper[4720]: I0122 07:41:33.011200 4720 generic.go:334] "Generic (PLEG): container finished" podID="d7f4c945-2519-4fac-a039-39388edfc00c" containerID="0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8" exitCode=0 Jan 22 07:41:33 crc kubenswrapper[4720]: I0122 07:41:33.011232 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerDied","Data":"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"} Jan 22 07:41:33 crc kubenswrapper[4720]: I0122 07:41:33.040489 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhm2j" podStartSLOduration=3.122476638 podStartE2EDuration="6.040471078s" podCreationTimestamp="2026-01-22 07:41:27 +0000 UTC" firstStartedPulling="2026-01-22 07:41:28.968244024 +0000 UTC m=+3981.110150729" lastFinishedPulling="2026-01-22 07:41:31.886238464 +0000 UTC m=+3984.028145169" observedRunningTime="2026-01-22 07:41:33.033298564 +0000 UTC m=+3985.175205289" watchObservedRunningTime="2026-01-22 07:41:33.040471078 +0000 UTC m=+3985.182377783" Jan 22 07:41:35 crc kubenswrapper[4720]: I0122 07:41:35.028445 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerStarted","Data":"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"} Jan 22 07:41:35 crc kubenswrapper[4720]: I0122 07:41:35.059853 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7vhtl" podStartSLOduration=3.667944883 podStartE2EDuration="9.059836095s" podCreationTimestamp="2026-01-22 07:41:26 +0000 UTC" firstStartedPulling="2026-01-22 07:41:28.971189447 +0000 UTC m=+3981.113096152" lastFinishedPulling="2026-01-22 07:41:34.363080659 +0000 UTC m=+3986.504987364" observedRunningTime="2026-01-22 07:41:35.057004824 +0000 UTC m=+3987.198911529" watchObservedRunningTime="2026-01-22 07:41:35.059836095 +0000 UTC m=+3987.201742800" Jan 22 07:41:37 crc kubenswrapper[4720]: I0122 07:41:37.320781 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:37 crc kubenswrapper[4720]: I0122 07:41:37.321412 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7vhtl" Jan 22 07:41:37 crc kubenswrapper[4720]: I0122 07:41:37.964710 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:37 crc kubenswrapper[4720]: I0122 07:41:37.964976 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:38 crc kubenswrapper[4720]: I0122 07:41:38.007927 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:38 crc kubenswrapper[4720]: I0122 07:41:38.089270 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhm2j" Jan 22 07:41:38 crc 
kubenswrapper[4720]: I0122 07:41:38.359007 4720 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7vhtl" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="registry-server" probeResult="failure" output=<
Jan 22 07:41:38 crc kubenswrapper[4720]: timeout: failed to connect service ":50051" within 1s
Jan 22 07:41:38 crc kubenswrapper[4720]: >
Jan 22 07:41:40 crc kubenswrapper[4720]: I0122 07:41:40.387551 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"]
Jan 22 07:41:41 crc kubenswrapper[4720]: I0122 07:41:41.070231 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhm2j" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="registry-server" containerID="cri-o://62816aaefd63b7599a003124abdf2575a8db024ab6f4022255f4d9e961e37be8" gracePeriod=2
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.083715 4720 generic.go:334] "Generic (PLEG): container finished" podID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerID="62816aaefd63b7599a003124abdf2575a8db024ab6f4022255f4d9e961e37be8" exitCode=0
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.083808 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerDied","Data":"62816aaefd63b7599a003124abdf2575a8db024ab6f4022255f4d9e961e37be8"}
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.820845 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhm2j"
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.977497 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities\") pod \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") "
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.977612 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqhm8\" (UniqueName: \"kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8\") pod \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") "
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.977741 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content\") pod \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\" (UID: \"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0\") "
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.978369 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities" (OuterVolumeSpecName: "utilities") pod "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" (UID: "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:41:42 crc kubenswrapper[4720]: I0122 07:41:42.997339 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8" (OuterVolumeSpecName: "kube-api-access-fqhm8") pod "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" (UID: "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0"). InnerVolumeSpecName "kube-api-access-fqhm8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.029514 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" (UID: "4c606aa6-ef9a-44ba-a8d5-7e05da546fc0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.079895 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.079970 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.079997 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqhm8\" (UniqueName: \"kubernetes.io/projected/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0-kube-api-access-fqhm8\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.093194 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhm2j" event={"ID":"4c606aa6-ef9a-44ba-a8d5-7e05da546fc0","Type":"ContainerDied","Data":"4458fcb2d47dfd0dc6a664f8d749f4c7566f41da096a73bcf82c1c8521028810"}
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.093255 4720 scope.go:117] "RemoveContainer" containerID="62816aaefd63b7599a003124abdf2575a8db024ab6f4022255f4d9e961e37be8"
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.093400 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhm2j"
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.122752 4720 scope.go:117] "RemoveContainer" containerID="cef60d1f3db2d3c0d24ddec2ff0748091c71dec0733f3f23b60c454103420c82"
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.127088 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"]
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.132246 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhm2j"]
Jan 22 07:41:43 crc kubenswrapper[4720]: I0122 07:41:43.173001 4720 scope.go:117] "RemoveContainer" containerID="5dcc1fcecd2f4dd739db94d629d33405ea3ef04d8b999c177b4c4a2400678014"
Jan 22 07:41:44 crc kubenswrapper[4720]: I0122 07:41:44.219520 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" path="/var/lib/kubelet/pods/4c606aa6-ef9a-44ba-a8d5-7e05da546fc0/volumes"
Jan 22 07:41:45 crc kubenswrapper[4720]: I0122 07:41:45.210484 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:41:45 crc kubenswrapper[4720]: E0122 07:41:45.210874 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:41:47 crc kubenswrapper[4720]: I0122 07:41:47.379534 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7vhtl"
Jan 22 07:41:47 crc kubenswrapper[4720]: I0122 07:41:47.446398 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7vhtl"
Jan 22 07:41:50 crc kubenswrapper[4720]: I0122 07:41:50.992416 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"]
Jan 22 07:41:50 crc kubenswrapper[4720]: I0122 07:41:50.993005 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7vhtl" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="registry-server" containerID="cri-o://c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb" gracePeriod=2
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.054518 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhtl"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.168950 4720 generic.go:334] "Generic (PLEG): container finished" podID="d7f4c945-2519-4fac-a039-39388edfc00c" containerID="c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb" exitCode=0
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.169002 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerDied","Data":"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"}
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.169042 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7vhtl" event={"ID":"d7f4c945-2519-4fac-a039-39388edfc00c","Type":"ContainerDied","Data":"4a13af1a8e9b683b7723ece43d09bfc4e02dd335d1a5c73234219a2c3e8d787f"}
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.169061 4720 scope.go:117] "RemoveContainer" containerID="c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.169064 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7vhtl"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.191332 4720 scope.go:117] "RemoveContainer" containerID="0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.208926 4720 scope.go:117] "RemoveContainer" containerID="8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.246271 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content\") pod \"d7f4c945-2519-4fac-a039-39388edfc00c\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") "
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.246598 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnb27\" (UniqueName: \"kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27\") pod \"d7f4c945-2519-4fac-a039-39388edfc00c\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") "
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.246707 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities\") pod \"d7f4c945-2519-4fac-a039-39388edfc00c\" (UID: \"d7f4c945-2519-4fac-a039-39388edfc00c\") "
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.248583 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities" (OuterVolumeSpecName: "utilities") pod "d7f4c945-2519-4fac-a039-39388edfc00c" (UID: "d7f4c945-2519-4fac-a039-39388edfc00c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.251969 4720 scope.go:117] "RemoveContainer" containerID="c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"
Jan 22 07:41:52 crc kubenswrapper[4720]: E0122 07:41:52.252377 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb\": container with ID starting with c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb not found: ID does not exist" containerID="c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.252413 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb"} err="failed to get container status \"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb\": rpc error: code = NotFound desc = could not find container \"c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb\": container with ID starting with c582ca62036a7de4bfcd12a06769e2d2cdee950222372842f9bec10d51017cdb not found: ID does not exist"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.252439 4720 scope.go:117] "RemoveContainer" containerID="0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.252462 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27" (OuterVolumeSpecName: "kube-api-access-rnb27") pod "d7f4c945-2519-4fac-a039-39388edfc00c" (UID: "d7f4c945-2519-4fac-a039-39388edfc00c"). InnerVolumeSpecName "kube-api-access-rnb27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:41:52 crc kubenswrapper[4720]: E0122 07:41:52.252966 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8\": container with ID starting with 0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8 not found: ID does not exist" containerID="0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.253000 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8"} err="failed to get container status \"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8\": rpc error: code = NotFound desc = could not find container \"0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8\": container with ID starting with 0eb252516d9ee5bf982b1ba875c72a9e7440cb6fbfc230607a217e54441481b8 not found: ID does not exist"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.253021 4720 scope.go:117] "RemoveContainer" containerID="8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed"
Jan 22 07:41:52 crc kubenswrapper[4720]: E0122 07:41:52.253367 4720 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed\": container with ID starting with 8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed not found: ID does not exist" containerID="8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.253392 4720 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed"} err="failed to get container status \"8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed\": rpc error: code = NotFound desc = could not find container \"8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed\": container with ID starting with 8e9145585f7455ff6424ee74e498904fe734b21988c956baed8f75e1cbf8b7ed not found: ID does not exist"
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.348843 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnb27\" (UniqueName: \"kubernetes.io/projected/d7f4c945-2519-4fac-a039-39388edfc00c-kube-api-access-rnb27\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.348897 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.372814 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7f4c945-2519-4fac-a039-39388edfc00c" (UID: "d7f4c945-2519-4fac-a039-39388edfc00c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.450472 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7f4c945-2519-4fac-a039-39388edfc00c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.510899 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"]
Jan 22 07:41:52 crc kubenswrapper[4720]: I0122 07:41:52.516676 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7vhtl"]
Jan 22 07:41:54 crc kubenswrapper[4720]: I0122 07:41:54.220855 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" path="/var/lib/kubelet/pods/d7f4c945-2519-4fac-a039-39388edfc00c/volumes"
Jan 22 07:42:00 crc kubenswrapper[4720]: I0122 07:42:00.210934 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:42:00 crc kubenswrapper[4720]: E0122 07:42:00.211805 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:42:11 crc kubenswrapper[4720]: I0122 07:42:11.210576 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:42:11 crc kubenswrapper[4720]: E0122 07:42:11.212268 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:42:22 crc kubenswrapper[4720]: I0122 07:42:22.212277 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:42:22 crc kubenswrapper[4720]: E0122 07:42:22.213345 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:42:36 crc kubenswrapper[4720]: I0122 07:42:36.210781 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:42:36 crc kubenswrapper[4720]: E0122 07:42:36.211517 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:42:47 crc kubenswrapper[4720]: I0122 07:42:47.210768 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:42:47 crc kubenswrapper[4720]: E0122 07:42:47.211517 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:02 crc kubenswrapper[4720]: I0122 07:43:02.210727 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:43:02 crc kubenswrapper[4720]: E0122 07:43:02.211498 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:17 crc kubenswrapper[4720]: I0122 07:43:17.210728 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:43:17 crc kubenswrapper[4720]: E0122 07:43:17.211469 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:29 crc kubenswrapper[4720]: I0122 07:43:29.210314 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:43:29 crc kubenswrapper[4720]: E0122 07:43:29.211077 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:44 crc kubenswrapper[4720]: I0122 07:43:44.216570 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:43:44 crc kubenswrapper[4720]: E0122 07:43:44.217578 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:56 crc kubenswrapper[4720]: I0122 07:43:56.210666 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:43:56 crc kubenswrapper[4720]: E0122 07:43:56.211327 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.024944 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"]
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025552 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="extract-utilities"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025564 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="extract-utilities"
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025576 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="extract-content"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025584 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="extract-content"
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025597 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="extract-content"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025607 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="extract-content"
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025625 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="extract-utilities"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025634 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="extract-utilities"
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025647 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025652 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: E0122 07:43:57.025665 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025670 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025835 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c606aa6-ef9a-44ba-a8d5-7e05da546fc0" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.025855 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f4c945-2519-4fac-a039-39388edfc00c" containerName="registry-server"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.026982 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.073885 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.073986 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hnm6\" (UniqueName: \"kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.074034 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.090932 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"]
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.175308 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.175401 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hnm6\" (UniqueName: \"kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.175462 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.175847 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.175871 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.197544 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hnm6\" (UniqueName: \"kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6\") pod \"redhat-marketplace-ktnnh\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") " pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.356398 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:43:57 crc kubenswrapper[4720]: I0122 07:43:57.855449 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"]
Jan 22 07:43:58 crc kubenswrapper[4720]: I0122 07:43:58.180888 4720 generic.go:334] "Generic (PLEG): container finished" podID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerID="c90e30b174f0bb4a27b65620bf7551d98694f51deacfd63efd94f169f5ee45e4" exitCode=0
Jan 22 07:43:58 crc kubenswrapper[4720]: I0122 07:43:58.181141 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerDied","Data":"c90e30b174f0bb4a27b65620bf7551d98694f51deacfd63efd94f169f5ee45e4"}
Jan 22 07:43:58 crc kubenswrapper[4720]: I0122 07:43:58.181432 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerStarted","Data":"15c864672408345f5d67b03742e9f50c244dfdbb3936bf3c77b2b0d45cbfd124"}
Jan 22 07:43:59 crc kubenswrapper[4720]: I0122 07:43:59.190013 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerStarted","Data":"7ca258df4e5bc36007b792725c0f75521d62b7a25205b30832a769879a0a0391"}
Jan 22 07:44:00 crc kubenswrapper[4720]: I0122 07:44:00.199735 4720 generic.go:334] "Generic (PLEG): container finished" podID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerID="7ca258df4e5bc36007b792725c0f75521d62b7a25205b30832a769879a0a0391" exitCode=0
Jan 22 07:44:00 crc kubenswrapper[4720]: I0122 07:44:00.199971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerDied","Data":"7ca258df4e5bc36007b792725c0f75521d62b7a25205b30832a769879a0a0391"}
Jan 22 07:44:01 crc kubenswrapper[4720]: I0122 07:44:01.211389 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerStarted","Data":"4361586ed51abeaab0170a904cfcb040cbea0bd6ce0a2f253e9bee3405223349"}
Jan 22 07:44:01 crc kubenswrapper[4720]: I0122 07:44:01.241034 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ktnnh" podStartSLOduration=1.8197431800000001 podStartE2EDuration="4.241009042s" podCreationTimestamp="2026-01-22 07:43:57 +0000 UTC" firstStartedPulling="2026-01-22 07:43:58.183035186 +0000 UTC m=+4130.324941901" lastFinishedPulling="2026-01-22 07:44:00.604301058 +0000 UTC m=+4132.746207763" observedRunningTime="2026-01-22 07:44:01.228766424 +0000 UTC m=+4133.370673159" watchObservedRunningTime="2026-01-22 07:44:01.241009042 +0000 UTC m=+4133.382915817"
Jan 22 07:44:07 crc kubenswrapper[4720]: I0122 07:44:07.357164 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:44:07 crc kubenswrapper[4720]: I0122 07:44:07.358483 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:44:07 crc kubenswrapper[4720]: I0122 07:44:07.401078 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:44:08 crc kubenswrapper[4720]: I0122 07:44:08.317123 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:44:11 crc kubenswrapper[4720]: I0122 07:44:11.031716 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"]
Jan 22 07:44:11 crc kubenswrapper[4720]: I0122 07:44:11.032735 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ktnnh" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="registry-server" containerID="cri-o://4361586ed51abeaab0170a904cfcb040cbea0bd6ce0a2f253e9bee3405223349" gracePeriod=2
Jan 22 07:44:11 crc kubenswrapper[4720]: I0122 07:44:11.211038 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:44:11 crc kubenswrapper[4720]: E0122 07:44:11.211214 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.307923 4720 generic.go:334] "Generic (PLEG): container finished" podID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerID="4361586ed51abeaab0170a904cfcb040cbea0bd6ce0a2f253e9bee3405223349" exitCode=0
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.307963 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh" event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerDied","Data":"4361586ed51abeaab0170a904cfcb040cbea0bd6ce0a2f253e9bee3405223349"}
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.555600 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktnnh"
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.617449 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities\") pod \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") "
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.617513 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content\") pod \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") "
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.617601 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hnm6\" (UniqueName: \"kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6\") pod \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\" (UID: \"f5d835ea-ce12-44d3-b447-0ea6eea286ff\") "
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.622522 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities" (OuterVolumeSpecName: "utilities") pod "f5d835ea-ce12-44d3-b447-0ea6eea286ff" (UID: "f5d835ea-ce12-44d3-b447-0ea6eea286ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.626177 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6" (OuterVolumeSpecName: "kube-api-access-8hnm6") pod "f5d835ea-ce12-44d3-b447-0ea6eea286ff" (UID: "f5d835ea-ce12-44d3-b447-0ea6eea286ff"). InnerVolumeSpecName "kube-api-access-8hnm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.644943 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5d835ea-ce12-44d3-b447-0ea6eea286ff" (UID: "f5d835ea-ce12-44d3-b447-0ea6eea286ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.719834 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.719872 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5d835ea-ce12-44d3-b447-0ea6eea286ff-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:44:12 crc kubenswrapper[4720]: I0122 07:44:12.719886 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hnm6\" (UniqueName: \"kubernetes.io/projected/f5d835ea-ce12-44d3-b447-0ea6eea286ff-kube-api-access-8hnm6\") on node \"crc\" DevicePath \"\""
Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.319746 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ktnnh"
event={"ID":"f5d835ea-ce12-44d3-b447-0ea6eea286ff","Type":"ContainerDied","Data":"15c864672408345f5d67b03742e9f50c244dfdbb3936bf3c77b2b0d45cbfd124"} Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.320084 4720 scope.go:117] "RemoveContainer" containerID="4361586ed51abeaab0170a904cfcb040cbea0bd6ce0a2f253e9bee3405223349" Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.319805 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ktnnh" Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.337429 4720 scope.go:117] "RemoveContainer" containerID="7ca258df4e5bc36007b792725c0f75521d62b7a25205b30832a769879a0a0391" Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.367996 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"] Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.370218 4720 scope.go:117] "RemoveContainer" containerID="c90e30b174f0bb4a27b65620bf7551d98694f51deacfd63efd94f169f5ee45e4" Jan 22 07:44:13 crc kubenswrapper[4720]: I0122 07:44:13.385073 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ktnnh"] Jan 22 07:44:14 crc kubenswrapper[4720]: I0122 07:44:14.223525 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" path="/var/lib/kubelet/pods/f5d835ea-ce12-44d3-b447-0ea6eea286ff/volumes" Jan 22 07:44:26 crc kubenswrapper[4720]: I0122 07:44:26.210434 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:44:26 crc kubenswrapper[4720]: E0122 07:44:26.211325 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:44:40 crc kubenswrapper[4720]: I0122 07:44:40.210609 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b" Jan 22 07:44:40 crc kubenswrapper[4720]: I0122 07:44:40.516450 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8"} Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.199207 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff"] Jan 22 07:45:00 crc kubenswrapper[4720]: E0122 07:45:00.200252 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="extract-content" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.200267 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="extract-content" Jan 22 07:45:00 crc kubenswrapper[4720]: E0122 07:45:00.200287 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="extract-utilities" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.200293 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="extract-utilities" Jan 22 07:45:00 crc kubenswrapper[4720]: E0122 07:45:00.200301 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="registry-server" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.200308 4720 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="registry-server" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.200468 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5d835ea-ce12-44d3-b447-0ea6eea286ff" containerName="registry-server" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.201043 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.204108 4720 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.204108 4720 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.222751 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff"] Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.351179 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.351537 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vvs2\" (UniqueName: \"kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.351747 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.453736 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vvs2\" (UniqueName: \"kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.454417 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.455272 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.455174 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.462582 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.472103 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vvs2\" (UniqueName: \"kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2\") pod \"collect-profiles-29484465-46tff\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.525832 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:00 crc kubenswrapper[4720]: I0122 07:45:00.968500 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff"] Jan 22 07:45:00 crc kubenswrapper[4720]: W0122 07:45:00.970522 4720 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode29718b9_c8e0_4732_9699_dbffd0bf9257.slice/crio-09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441 WatchSource:0}: Error finding container 09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441: Status 404 returned error can't find the container with id 09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441 Jan 22 07:45:01 crc kubenswrapper[4720]: I0122 07:45:01.984278 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" event={"ID":"e29718b9-c8e0-4732-9699-dbffd0bf9257","Type":"ContainerStarted","Data":"127ca0c333920cb32402a7648ff0f4f3e69f3c2adde284ca30efc17000cffeb6"} Jan 22 07:45:01 crc kubenswrapper[4720]: I0122 07:45:01.984617 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" event={"ID":"e29718b9-c8e0-4732-9699-dbffd0bf9257","Type":"ContainerStarted","Data":"09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441"} Jan 22 07:45:02 crc kubenswrapper[4720]: I0122 07:45:02.002774 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" podStartSLOduration=2.002753682 podStartE2EDuration="2.002753682s" podCreationTimestamp="2026-01-22 07:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-22 
07:45:02.001814965 +0000 UTC m=+4194.143721680" watchObservedRunningTime="2026-01-22 07:45:02.002753682 +0000 UTC m=+4194.144660397" Jan 22 07:45:02 crc kubenswrapper[4720]: I0122 07:45:02.992318 4720 generic.go:334] "Generic (PLEG): container finished" podID="e29718b9-c8e0-4732-9699-dbffd0bf9257" containerID="127ca0c333920cb32402a7648ff0f4f3e69f3c2adde284ca30efc17000cffeb6" exitCode=0 Jan 22 07:45:02 crc kubenswrapper[4720]: I0122 07:45:02.992373 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" event={"ID":"e29718b9-c8e0-4732-9699-dbffd0bf9257","Type":"ContainerDied","Data":"127ca0c333920cb32402a7648ff0f4f3e69f3c2adde284ca30efc17000cffeb6"} Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.268182 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.418842 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vvs2\" (UniqueName: \"kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2\") pod \"e29718b9-c8e0-4732-9699-dbffd0bf9257\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.418983 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume\") pod \"e29718b9-c8e0-4732-9699-dbffd0bf9257\" (UID: \"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.419009 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume\") pod \"e29718b9-c8e0-4732-9699-dbffd0bf9257\" (UID: 
\"e29718b9-c8e0-4732-9699-dbffd0bf9257\") " Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.419770 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume" (OuterVolumeSpecName: "config-volume") pod "e29718b9-c8e0-4732-9699-dbffd0bf9257" (UID: "e29718b9-c8e0-4732-9699-dbffd0bf9257"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.424827 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2" (OuterVolumeSpecName: "kube-api-access-8vvs2") pod "e29718b9-c8e0-4732-9699-dbffd0bf9257" (UID: "e29718b9-c8e0-4732-9699-dbffd0bf9257"). InnerVolumeSpecName "kube-api-access-8vvs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.425100 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e29718b9-c8e0-4732-9699-dbffd0bf9257" (UID: "e29718b9-c8e0-4732-9699-dbffd0bf9257"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.521207 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vvs2\" (UniqueName: \"kubernetes.io/projected/e29718b9-c8e0-4732-9699-dbffd0bf9257-kube-api-access-8vvs2\") on node \"crc\" DevicePath \"\"" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.521234 4720 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e29718b9-c8e0-4732-9699-dbffd0bf9257-config-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:45:04 crc kubenswrapper[4720]: I0122 07:45:04.521244 4720 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e29718b9-c8e0-4732-9699-dbffd0bf9257-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 22 07:45:05 crc kubenswrapper[4720]: I0122 07:45:05.006988 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" event={"ID":"e29718b9-c8e0-4732-9699-dbffd0bf9257","Type":"ContainerDied","Data":"09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441"} Jan 22 07:45:05 crc kubenswrapper[4720]: I0122 07:45:05.007030 4720 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09a35220b7ecd4d6bebce725ca3e68028be74ce307cfe40b5c3aca7a0773f441" Jan 22 07:45:05 crc kubenswrapper[4720]: I0122 07:45:05.007057 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29484465-46tff" Jan 22 07:45:05 crc kubenswrapper[4720]: I0122 07:45:05.351273 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"] Jan 22 07:45:05 crc kubenswrapper[4720]: I0122 07:45:05.358045 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29484420-f6lwk"] Jan 22 07:45:06 crc kubenswrapper[4720]: I0122 07:45:06.222283 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38ccafb-7319-4e13-a9e1-f38f73a8bd3c" path="/var/lib/kubelet/pods/c38ccafb-7319-4e13-a9e1-f38f73a8bd3c/volumes" Jan 22 07:45:20 crc kubenswrapper[4720]: I0122 07:45:20.508191 4720 scope.go:117] "RemoveContainer" containerID="2fd1f6213d307140eac84aae651360f60c42a6f8497d24e80e7cad9b552dc318" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.428755 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-696fl"] Jan 22 07:46:23 crc kubenswrapper[4720]: E0122 07:46:23.430969 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29718b9-c8e0-4732-9699-dbffd0bf9257" containerName="collect-profiles" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.431080 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29718b9-c8e0-4732-9699-dbffd0bf9257" containerName="collect-profiles" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.431399 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29718b9-c8e0-4732-9699-dbffd0bf9257" containerName="collect-profiles" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.433011 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.442192 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-696fl"] Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.582628 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh5cq\" (UniqueName: \"kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.582872 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.583034 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.685137 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.685208 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.685254 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh5cq\" (UniqueName: \"kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.685722 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.685744 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.711350 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh5cq\" (UniqueName: \"kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq\") pod \"community-operators-696fl\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") " pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:23 crc kubenswrapper[4720]: I0122 07:46:23.765761 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:24 crc kubenswrapper[4720]: I0122 07:46:24.086660 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-696fl"] Jan 22 07:46:24 crc kubenswrapper[4720]: I0122 07:46:24.625771 4720 generic.go:334] "Generic (PLEG): container finished" podID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerID="d92b47c68847680dfe34473f02b076c220787b12f4ec02fa644e7caaf08adbfa" exitCode=0 Jan 22 07:46:24 crc kubenswrapper[4720]: I0122 07:46:24.626040 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerDied","Data":"d92b47c68847680dfe34473f02b076c220787b12f4ec02fa644e7caaf08adbfa"} Jan 22 07:46:24 crc kubenswrapper[4720]: I0122 07:46:24.626064 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerStarted","Data":"7446f4937b59b5d42ecf7fd8c3ec7550bf72b9d575902f79de8de7d7186a7e12"} Jan 22 07:46:26 crc kubenswrapper[4720]: I0122 07:46:26.644223 4720 generic.go:334] "Generic (PLEG): container finished" podID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerID="47d54fa97a9ccf103480dfec487bb8131c7f9dafcbd24cba4d6dd2e4e3840ebb" exitCode=0 Jan 22 07:46:26 crc kubenswrapper[4720]: I0122 07:46:26.645867 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerDied","Data":"47d54fa97a9ccf103480dfec487bb8131c7f9dafcbd24cba4d6dd2e4e3840ebb"} Jan 22 07:46:27 crc kubenswrapper[4720]: I0122 07:46:27.656272 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" 
event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerStarted","Data":"0076d33eb24d51a659e09f9b3a8e76c92c5a5db60674c3846f9708a4b149cc81"} Jan 22 07:46:27 crc kubenswrapper[4720]: I0122 07:46:27.678097 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-696fl" podStartSLOduration=1.946926787 podStartE2EDuration="4.678078529s" podCreationTimestamp="2026-01-22 07:46:23 +0000 UTC" firstStartedPulling="2026-01-22 07:46:24.627416683 +0000 UTC m=+4276.769323388" lastFinishedPulling="2026-01-22 07:46:27.358568425 +0000 UTC m=+4279.500475130" observedRunningTime="2026-01-22 07:46:27.674358463 +0000 UTC m=+4279.816265178" watchObservedRunningTime="2026-01-22 07:46:27.678078529 +0000 UTC m=+4279.819985234" Jan 22 07:46:33 crc kubenswrapper[4720]: I0122 07:46:33.766109 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:33 crc kubenswrapper[4720]: I0122 07:46:33.767699 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:33 crc kubenswrapper[4720]: I0122 07:46:33.824149 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:34 crc kubenswrapper[4720]: I0122 07:46:34.759762 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-696fl" Jan 22 07:46:38 crc kubenswrapper[4720]: I0122 07:46:38.222624 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-696fl"] Jan 22 07:46:38 crc kubenswrapper[4720]: I0122 07:46:38.223344 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-696fl" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="registry-server" 
containerID="cri-o://0076d33eb24d51a659e09f9b3a8e76c92c5a5db60674c3846f9708a4b149cc81" gracePeriod=2
Jan 22 07:46:38 crc kubenswrapper[4720]: I0122 07:46:38.739528 4720 generic.go:334] "Generic (PLEG): container finished" podID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerID="0076d33eb24d51a659e09f9b3a8e76c92c5a5db60674c3846f9708a4b149cc81" exitCode=0
Jan 22 07:46:38 crc kubenswrapper[4720]: I0122 07:46:38.739568 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerDied","Data":"0076d33eb24d51a659e09f9b3a8e76c92c5a5db60674c3846f9708a4b149cc81"}
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.142956 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-696fl"
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.266698 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities\") pod \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") "
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.266791 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content\") pod \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") "
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.266933 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh5cq\" (UniqueName: \"kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq\") pod \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\" (UID: \"6bb29bc6-ec95-4c49-90d3-111a380c8c79\") "
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.269339 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities" (OuterVolumeSpecName: "utilities") pod "6bb29bc6-ec95-4c49-90d3-111a380c8c79" (UID: "6bb29bc6-ec95-4c49-90d3-111a380c8c79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.279971 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq" (OuterVolumeSpecName: "kube-api-access-gh5cq") pod "6bb29bc6-ec95-4c49-90d3-111a380c8c79" (UID: "6bb29bc6-ec95-4c49-90d3-111a380c8c79"). InnerVolumeSpecName "kube-api-access-gh5cq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.321890 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bb29bc6-ec95-4c49-90d3-111a380c8c79" (UID: "6bb29bc6-ec95-4c49-90d3-111a380c8c79"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.370144 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.370174 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bb29bc6-ec95-4c49-90d3-111a380c8c79-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.370207 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh5cq\" (UniqueName: \"kubernetes.io/projected/6bb29bc6-ec95-4c49-90d3-111a380c8c79-kube-api-access-gh5cq\") on node \"crc\" DevicePath \"\""
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.747404 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-696fl" event={"ID":"6bb29bc6-ec95-4c49-90d3-111a380c8c79","Type":"ContainerDied","Data":"7446f4937b59b5d42ecf7fd8c3ec7550bf72b9d575902f79de8de7d7186a7e12"}
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.747468 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-696fl"
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.747475 4720 scope.go:117] "RemoveContainer" containerID="0076d33eb24d51a659e09f9b3a8e76c92c5a5db60674c3846f9708a4b149cc81"
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.765001 4720 scope.go:117] "RemoveContainer" containerID="47d54fa97a9ccf103480dfec487bb8131c7f9dafcbd24cba4d6dd2e4e3840ebb"
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.786105 4720 scope.go:117] "RemoveContainer" containerID="d92b47c68847680dfe34473f02b076c220787b12f4ec02fa644e7caaf08adbfa"
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.805032 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-696fl"]
Jan 22 07:46:39 crc kubenswrapper[4720]: I0122 07:46:39.811159 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-696fl"]
Jan 22 07:46:40 crc kubenswrapper[4720]: I0122 07:46:40.220479 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" path="/var/lib/kubelet/pods/6bb29bc6-ec95-4c49-90d3-111a380c8c79/volumes"
Jan 22 07:46:59 crc kubenswrapper[4720]: I0122 07:46:59.780063 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:46:59 crc kubenswrapper[4720]: I0122 07:46:59.780482 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:47:29 crc kubenswrapper[4720]: I0122 07:47:29.779956 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:47:29 crc kubenswrapper[4720]: I0122 07:47:29.780590 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:47:59 crc kubenswrapper[4720]: I0122 07:47:59.780931 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:47:59 crc kubenswrapper[4720]: I0122 07:47:59.781507 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:47:59 crc kubenswrapper[4720]: I0122 07:47:59.781559 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:47:59 crc kubenswrapper[4720]: I0122 07:47:59.782264 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 07:47:59 crc kubenswrapper[4720]: I0122 07:47:59.782322 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8" gracePeriod=600
Jan 22 07:48:00 crc kubenswrapper[4720]: I0122 07:48:00.370736 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8" exitCode=0
Jan 22 07:48:00 crc kubenswrapper[4720]: I0122 07:48:00.371334 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8"}
Jan 22 07:48:00 crc kubenswrapper[4720]: I0122 07:48:00.371364 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"}
Jan 22 07:48:00 crc kubenswrapper[4720]: I0122 07:48:00.371380 4720 scope.go:117] "RemoveContainer" containerID="d93ba9d9d132d1e9c26102e6659c91bdad0555c67031d00274c2f450bfe2748b"
Jan 22 07:50:29 crc kubenswrapper[4720]: I0122 07:50:29.780165 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:50:29 crc kubenswrapper[4720]: I0122 07:50:29.782489 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:50:59 crc kubenswrapper[4720]: I0122 07:50:59.780176 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:50:59 crc kubenswrapper[4720]: I0122 07:50:59.782796 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:51:29 crc kubenswrapper[4720]: I0122 07:51:29.780213 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 22 07:51:29 crc kubenswrapper[4720]: I0122 07:51:29.780776 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 22 07:51:29 crc kubenswrapper[4720]: I0122 07:51:29.780823 4720 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd"
Jan 22 07:51:29 crc kubenswrapper[4720]: I0122 07:51:29.781585 4720 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"} pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 22 07:51:29 crc kubenswrapper[4720]: I0122 07:51:29.781646 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" containerID="cri-o://0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" gracePeriod=600
Jan 22 07:51:30 crc kubenswrapper[4720]: I0122 07:51:30.159680 4720 generic.go:334] "Generic (PLEG): container finished" podID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" exitCode=0
Jan 22 07:51:30 crc kubenswrapper[4720]: I0122 07:51:30.159851 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerDied","Data":"0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"}
Jan 22 07:51:30 crc kubenswrapper[4720]: I0122 07:51:30.160167 4720 scope.go:117] "RemoveContainer" containerID="c0022d1e8e69f2a0f3848614f20fec2a6391fc510938e5521ba85d4bb9f113e8"
Jan 22 07:51:30 crc kubenswrapper[4720]: E0122 07:51:30.666081 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:51:31 crc kubenswrapper[4720]: I0122 07:51:31.169256 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:51:31 crc kubenswrapper[4720]: E0122 07:51:31.169638 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:51:42 crc kubenswrapper[4720]: I0122 07:51:42.210947 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:51:42 crc kubenswrapper[4720]: E0122 07:51:42.211966 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.633513 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:51:43 crc kubenswrapper[4720]: E0122 07:51:43.634223 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="extract-utilities"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.634255 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="extract-utilities"
Jan 22 07:51:43 crc kubenswrapper[4720]: E0122 07:51:43.634275 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="registry-server"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.634280 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="registry-server"
Jan 22 07:51:43 crc kubenswrapper[4720]: E0122 07:51:43.634302 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="extract-content"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.634308 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="extract-content"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.634461 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb29bc6-ec95-4c49-90d3-111a380c8c79" containerName="registry-server"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.635766 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.646872 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.735292 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.735712 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vs6v\" (UniqueName: \"kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.735999 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.837421 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.837501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.837527 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vs6v\" (UniqueName: \"kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.838025 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.838096 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.858610 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vs6v\" (UniqueName: \"kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v\") pod \"redhat-operators-9jfjp\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") " pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:43 crc kubenswrapper[4720]: I0122 07:51:43.955680 4720 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:44 crc kubenswrapper[4720]: I0122 07:51:44.430780 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:51:45 crc kubenswrapper[4720]: I0122 07:51:45.270192 4720 generic.go:334] "Generic (PLEG): container finished" podID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerID="f2bdffdb160ecb13536998dbe35509400af8150fdda34f96fcceb673945325fa" exitCode=0
Jan 22 07:51:45 crc kubenswrapper[4720]: I0122 07:51:45.270502 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerDied","Data":"f2bdffdb160ecb13536998dbe35509400af8150fdda34f96fcceb673945325fa"}
Jan 22 07:51:45 crc kubenswrapper[4720]: I0122 07:51:45.270535 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerStarted","Data":"3e75c54579329a30629f2710beac6928805203aec01a782ec963caa56bd0a2d7"}
Jan 22 07:51:45 crc kubenswrapper[4720]: I0122 07:51:45.272711 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 22 07:51:46 crc kubenswrapper[4720]: I0122 07:51:46.281565 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerStarted","Data":"953134b7094191c1de35b7d7041f9ef6768ac1d7edb6d37e5e72ae14d97509c7"}
Jan 22 07:51:47 crc kubenswrapper[4720]: I0122 07:51:47.290729 4720 generic.go:334] "Generic (PLEG): container finished" podID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerID="953134b7094191c1de35b7d7041f9ef6768ac1d7edb6d37e5e72ae14d97509c7" exitCode=0
Jan 22 07:51:47 crc kubenswrapper[4720]: I0122 07:51:47.290924 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerDied","Data":"953134b7094191c1de35b7d7041f9ef6768ac1d7edb6d37e5e72ae14d97509c7"}
Jan 22 07:51:48 crc kubenswrapper[4720]: I0122 07:51:48.301081 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerStarted","Data":"ac26feb23c6ea10060b352440e8a9b8f9244e34b1e2281316e1b29110fcadd06"}
Jan 22 07:51:48 crc kubenswrapper[4720]: I0122 07:51:48.329841 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jfjp" podStartSLOduration=2.586187953 podStartE2EDuration="5.329810671s" podCreationTimestamp="2026-01-22 07:51:43 +0000 UTC" firstStartedPulling="2026-01-22 07:51:45.272502617 +0000 UTC m=+4597.414409312" lastFinishedPulling="2026-01-22 07:51:48.016125325 +0000 UTC m=+4600.158032030" observedRunningTime="2026-01-22 07:51:48.323621315 +0000 UTC m=+4600.465528030" watchObservedRunningTime="2026-01-22 07:51:48.329810671 +0000 UTC m=+4600.471717396"
Jan 22 07:51:53 crc kubenswrapper[4720]: I0122 07:51:53.956379 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:53 crc kubenswrapper[4720]: I0122 07:51:53.956944 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:54 crc kubenswrapper[4720]: I0122 07:51:54.008108 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:54 crc kubenswrapper[4720]: I0122 07:51:54.458268 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:51:55 crc kubenswrapper[4720]: I0122 07:51:55.743931 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:51:55 crc kubenswrapper[4720]: E0122 07:51:55.744427 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:51:57 crc kubenswrapper[4720]: I0122 07:51:57.622712 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:51:57 crc kubenswrapper[4720]: I0122 07:51:57.623202 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jfjp" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="registry-server" containerID="cri-o://ac26feb23c6ea10060b352440e8a9b8f9244e34b1e2281316e1b29110fcadd06" gracePeriod=2
Jan 22 07:52:00 crc kubenswrapper[4720]: I0122 07:52:00.795233 4720 generic.go:334] "Generic (PLEG): container finished" podID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerID="ac26feb23c6ea10060b352440e8a9b8f9244e34b1e2281316e1b29110fcadd06" exitCode=0
Jan 22 07:52:00 crc kubenswrapper[4720]: I0122 07:52:00.795429 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerDied","Data":"ac26feb23c6ea10060b352440e8a9b8f9244e34b1e2281316e1b29110fcadd06"}
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.745305 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.806255 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jfjp" event={"ID":"dc1cd455-c554-4a22-8b6a-69c4c56562f1","Type":"ContainerDied","Data":"3e75c54579329a30629f2710beac6928805203aec01a782ec963caa56bd0a2d7"}
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.806308 4720 scope.go:117] "RemoveContainer" containerID="ac26feb23c6ea10060b352440e8a9b8f9244e34b1e2281316e1b29110fcadd06"
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.806352 4720 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jfjp"
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.827016 4720 scope.go:117] "RemoveContainer" containerID="953134b7094191c1de35b7d7041f9ef6768ac1d7edb6d37e5e72ae14d97509c7"
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.851779 4720 scope.go:117] "RemoveContainer" containerID="f2bdffdb160ecb13536998dbe35509400af8150fdda34f96fcceb673945325fa"
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.853311 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities\") pod \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") "
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.853376 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vs6v\" (UniqueName: \"kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v\") pod \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") "
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.853473 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content\") pod \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\" (UID: \"dc1cd455-c554-4a22-8b6a-69c4c56562f1\") "
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.854250 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities" (OuterVolumeSpecName: "utilities") pod "dc1cd455-c554-4a22-8b6a-69c4c56562f1" (UID: "dc1cd455-c554-4a22-8b6a-69c4c56562f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.858989 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v" (OuterVolumeSpecName: "kube-api-access-2vs6v") pod "dc1cd455-c554-4a22-8b6a-69c4c56562f1" (UID: "dc1cd455-c554-4a22-8b6a-69c4c56562f1"). InnerVolumeSpecName "kube-api-access-2vs6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.954518 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-utilities\") on node \"crc\" DevicePath \"\""
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.954573 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vs6v\" (UniqueName: \"kubernetes.io/projected/dc1cd455-c554-4a22-8b6a-69c4c56562f1-kube-api-access-2vs6v\") on node \"crc\" DevicePath \"\""
Jan 22 07:52:01 crc kubenswrapper[4720]: I0122 07:52:01.973990 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc1cd455-c554-4a22-8b6a-69c4c56562f1" (UID: "dc1cd455-c554-4a22-8b6a-69c4c56562f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 22 07:52:02 crc kubenswrapper[4720]: I0122 07:52:02.055992 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc1cd455-c554-4a22-8b6a-69c4c56562f1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 22 07:52:02 crc kubenswrapper[4720]: I0122 07:52:02.133945 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:52:02 crc kubenswrapper[4720]: I0122 07:52:02.149692 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jfjp"]
Jan 22 07:52:02 crc kubenswrapper[4720]: I0122 07:52:02.222388 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" path="/var/lib/kubelet/pods/dc1cd455-c554-4a22-8b6a-69c4c56562f1/volumes"
Jan 22 07:52:08 crc kubenswrapper[4720]: I0122 07:52:08.228889 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:52:08 crc kubenswrapper[4720]: E0122 07:52:08.237013 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:52:19 crc kubenswrapper[4720]: I0122 07:52:19.211461 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:52:19 crc kubenswrapper[4720]: E0122 07:52:19.212302 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:52:30 crc kubenswrapper[4720]: I0122 07:52:30.214292 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:52:30 crc kubenswrapper[4720]: E0122 07:52:30.215633 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:52:43 crc kubenswrapper[4720]: I0122 07:52:43.211098 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:52:43 crc kubenswrapper[4720]: E0122 07:52:43.211787 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:52:57 crc kubenswrapper[4720]: I0122 07:52:57.211858 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099"
Jan 22 07:52:57 crc kubenswrapper[4720]: E0122 07:52:57.212646 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.641302 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ztqct"]
Jan 22 07:52:59 crc kubenswrapper[4720]: E0122 07:52:59.642157 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="registry-server"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.642173 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="registry-server"
Jan 22 07:52:59 crc kubenswrapper[4720]: E0122 07:52:59.642221 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="extract-content"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.642228 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="extract-content"
Jan 22 07:52:59 crc kubenswrapper[4720]: E0122 07:52:59.642245 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="extract-utilities"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.642252 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="extract-utilities"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.642495 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc1cd455-c554-4a22-8b6a-69c4c56562f1" containerName="registry-server"
Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.644524 4720 util.go:30] "No sandbox for
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.652638 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztqct"] Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.782585 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.782657 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.782781 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgtc\" (UniqueName: \"kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.885434 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgtc\" (UniqueName: \"kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.885530 4720 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.885568 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.886353 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.886997 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.910325 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgtc\" (UniqueName: \"kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc\") pod \"certified-operators-ztqct\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:52:59 crc kubenswrapper[4720]: I0122 07:52:59.974468 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:00 crc kubenswrapper[4720]: I0122 07:53:00.516476 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ztqct"] Jan 22 07:53:01 crc kubenswrapper[4720]: I0122 07:53:01.295547 4720 generic.go:334] "Generic (PLEG): container finished" podID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerID="2065bf40070f2df2c8fb7b2e2c850fd49be1bc5bc89914df132446a2311f8103" exitCode=0 Jan 22 07:53:01 crc kubenswrapper[4720]: I0122 07:53:01.295649 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerDied","Data":"2065bf40070f2df2c8fb7b2e2c850fd49be1bc5bc89914df132446a2311f8103"} Jan 22 07:53:01 crc kubenswrapper[4720]: I0122 07:53:01.296796 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerStarted","Data":"0973386994b3923f7242b0c3602c24d342868d753709ecf0724d22025d5043c2"} Jan 22 07:53:02 crc kubenswrapper[4720]: I0122 07:53:02.304685 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerStarted","Data":"acc5943c90124bd557261060752e0f0ab3b728fa015d27ba551c2b08222b3a72"} Jan 22 07:53:03 crc kubenswrapper[4720]: I0122 07:53:03.316259 4720 generic.go:334] "Generic (PLEG): container finished" podID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerID="acc5943c90124bd557261060752e0f0ab3b728fa015d27ba551c2b08222b3a72" exitCode=0 Jan 22 07:53:03 crc kubenswrapper[4720]: I0122 07:53:03.317100 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" 
event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerDied","Data":"acc5943c90124bd557261060752e0f0ab3b728fa015d27ba551c2b08222b3a72"} Jan 22 07:53:04 crc kubenswrapper[4720]: I0122 07:53:04.326733 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerStarted","Data":"0b0edc2aba0f32dfbab17fb9acf8508ddf0549e8822abb6416da8427fbcbcbe9"} Jan 22 07:53:04 crc kubenswrapper[4720]: I0122 07:53:04.349223 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ztqct" podStartSLOduration=2.887455786 podStartE2EDuration="5.349205984s" podCreationTimestamp="2026-01-22 07:52:59 +0000 UTC" firstStartedPulling="2026-01-22 07:53:01.298085422 +0000 UTC m=+4673.439992127" lastFinishedPulling="2026-01-22 07:53:03.75983562 +0000 UTC m=+4675.901742325" observedRunningTime="2026-01-22 07:53:04.345808667 +0000 UTC m=+4676.487715392" watchObservedRunningTime="2026-01-22 07:53:04.349205984 +0000 UTC m=+4676.491112689" Jan 22 07:53:09 crc kubenswrapper[4720]: I0122 07:53:09.974927 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:09 crc kubenswrapper[4720]: I0122 07:53:09.976414 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:10 crc kubenswrapper[4720]: I0122 07:53:10.029364 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:10 crc kubenswrapper[4720]: I0122 07:53:10.211696 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:53:10 crc kubenswrapper[4720]: E0122 07:53:10.211998 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:53:10 crc kubenswrapper[4720]: I0122 07:53:10.417496 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:13 crc kubenswrapper[4720]: I0122 07:53:13.615527 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztqct"] Jan 22 07:53:13 crc kubenswrapper[4720]: I0122 07:53:13.616274 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ztqct" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="registry-server" containerID="cri-o://0b0edc2aba0f32dfbab17fb9acf8508ddf0549e8822abb6416da8427fbcbcbe9" gracePeriod=2 Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.407584 4720 generic.go:334] "Generic (PLEG): container finished" podID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerID="0b0edc2aba0f32dfbab17fb9acf8508ddf0549e8822abb6416da8427fbcbcbe9" exitCode=0 Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.407651 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerDied","Data":"0b0edc2aba0f32dfbab17fb9acf8508ddf0549e8822abb6416da8427fbcbcbe9"} Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.538260 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.545149 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities\") pod \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.545204 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlgtc\" (UniqueName: \"kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc\") pod \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.545251 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content\") pod \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\" (UID: \"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78\") " Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.546139 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities" (OuterVolumeSpecName: "utilities") pod "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" (UID: "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.550290 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.555646 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc" (OuterVolumeSpecName: "kube-api-access-wlgtc") pod "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" (UID: "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78"). InnerVolumeSpecName "kube-api-access-wlgtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.597167 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" (UID: "56435e0f-a6fe-43c7-ac2f-0d5053aa0c78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.651285 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlgtc\" (UniqueName: \"kubernetes.io/projected/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-kube-api-access-wlgtc\") on node \"crc\" DevicePath \"\"" Jan 22 07:53:14 crc kubenswrapper[4720]: I0122 07:53:14.651320 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.415788 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ztqct" event={"ID":"56435e0f-a6fe-43c7-ac2f-0d5053aa0c78","Type":"ContainerDied","Data":"0973386994b3923f7242b0c3602c24d342868d753709ecf0724d22025d5043c2"} Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.415837 4720 scope.go:117] "RemoveContainer" containerID="0b0edc2aba0f32dfbab17fb9acf8508ddf0549e8822abb6416da8427fbcbcbe9" Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.416123 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ztqct" Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.434955 4720 scope.go:117] "RemoveContainer" containerID="acc5943c90124bd557261060752e0f0ab3b728fa015d27ba551c2b08222b3a72" Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.450738 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ztqct"] Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.460489 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ztqct"] Jan 22 07:53:15 crc kubenswrapper[4720]: I0122 07:53:15.481245 4720 scope.go:117] "RemoveContainer" containerID="2065bf40070f2df2c8fb7b2e2c850fd49be1bc5bc89914df132446a2311f8103" Jan 22 07:53:16 crc kubenswrapper[4720]: I0122 07:53:16.219989 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" path="/var/lib/kubelet/pods/56435e0f-a6fe-43c7-ac2f-0d5053aa0c78/volumes" Jan 22 07:53:23 crc kubenswrapper[4720]: I0122 07:53:23.211354 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:53:23 crc kubenswrapper[4720]: E0122 07:53:23.212783 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:53:35 crc kubenswrapper[4720]: I0122 07:53:35.211045 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:53:35 crc kubenswrapper[4720]: E0122 07:53:35.212142 4720 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:53:48 crc kubenswrapper[4720]: I0122 07:53:48.216894 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:53:48 crc kubenswrapper[4720]: E0122 07:53:48.218982 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:01 crc kubenswrapper[4720]: I0122 07:54:01.211292 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:54:01 crc kubenswrapper[4720]: E0122 07:54:01.216178 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:12 crc kubenswrapper[4720]: I0122 07:54:12.210511 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:54:12 crc kubenswrapper[4720]: E0122 07:54:12.211190 4720 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:25 crc kubenswrapper[4720]: I0122 07:54:25.210957 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:54:25 crc kubenswrapper[4720]: E0122 07:54:25.211666 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:39 crc kubenswrapper[4720]: I0122 07:54:39.211443 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:54:39 crc kubenswrapper[4720]: E0122 07:54:39.212293 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.433681 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:40 crc kubenswrapper[4720]: E0122 07:54:40.434215 4720 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="registry-server" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.434233 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="registry-server" Jan 22 07:54:40 crc kubenswrapper[4720]: E0122 07:54:40.434257 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="extract-utilities" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.434265 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="extract-utilities" Jan 22 07:54:40 crc kubenswrapper[4720]: E0122 07:54:40.434282 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="extract-content" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.434293 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="extract-content" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.434535 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="56435e0f-a6fe-43c7-ac2f-0d5053aa0c78" containerName="registry-server" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.437848 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.445047 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.548993 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.549094 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8x4q\" (UniqueName: \"kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.549168 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.650501 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.650622 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g8x4q\" (UniqueName: \"kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.650723 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.651130 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.651220 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.670082 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8x4q\" (UniqueName: \"kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q\") pod \"redhat-marketplace-2xhdh\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:40 crc kubenswrapper[4720]: I0122 07:54:40.771292 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:41 crc kubenswrapper[4720]: I0122 07:54:41.210297 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:42 crc kubenswrapper[4720]: I0122 07:54:42.067538 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acda340-6400-47d1-b555-3fea5981f1ea" containerID="8fd7371029094a0a96bdbb2170eba2334cd39334efdac30102e38a052cdc807f" exitCode=0 Jan 22 07:54:42 crc kubenswrapper[4720]: I0122 07:54:42.067637 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerDied","Data":"8fd7371029094a0a96bdbb2170eba2334cd39334efdac30102e38a052cdc807f"} Jan 22 07:54:42 crc kubenswrapper[4720]: I0122 07:54:42.067855 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerStarted","Data":"f9e752a10d7538365155dbff3d09d4ed780a1a6f7d2b2dc1987623292879dc8b"} Jan 22 07:54:44 crc kubenswrapper[4720]: I0122 07:54:44.086297 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acda340-6400-47d1-b555-3fea5981f1ea" containerID="50612584bb9e5e211ff2ff051ab7a8e228a22cbfe1e9b351b2e89a2603e02fb0" exitCode=0 Jan 22 07:54:44 crc kubenswrapper[4720]: I0122 07:54:44.086383 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerDied","Data":"50612584bb9e5e211ff2ff051ab7a8e228a22cbfe1e9b351b2e89a2603e02fb0"} Jan 22 07:54:45 crc kubenswrapper[4720]: I0122 07:54:45.096518 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" 
event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerStarted","Data":"dd5a75c8cad5ea927b52ec124ae6c4bbbb86ba1fe8dfe90f8e3ffd5303ff67fe"} Jan 22 07:54:50 crc kubenswrapper[4720]: I0122 07:54:50.771720 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:50 crc kubenswrapper[4720]: I0122 07:54:50.774482 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:50 crc kubenswrapper[4720]: I0122 07:54:50.817773 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:50 crc kubenswrapper[4720]: I0122 07:54:50.839438 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xhdh" podStartSLOduration=8.38146344 podStartE2EDuration="10.83942129s" podCreationTimestamp="2026-01-22 07:54:40 +0000 UTC" firstStartedPulling="2026-01-22 07:54:42.069725201 +0000 UTC m=+4774.211631916" lastFinishedPulling="2026-01-22 07:54:44.527683061 +0000 UTC m=+4776.669589766" observedRunningTime="2026-01-22 07:54:45.115035707 +0000 UTC m=+4777.256942422" watchObservedRunningTime="2026-01-22 07:54:50.83942129 +0000 UTC m=+4782.981327995" Jan 22 07:54:51 crc kubenswrapper[4720]: I0122 07:54:51.181045 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:51 crc kubenswrapper[4720]: I0122 07:54:51.211894 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:54:51 crc kubenswrapper[4720]: E0122 07:54:51.212424 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:54:55 crc kubenswrapper[4720]: I0122 07:54:55.215305 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:55 crc kubenswrapper[4720]: I0122 07:54:55.215799 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xhdh" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="registry-server" containerID="cri-o://dd5a75c8cad5ea927b52ec124ae6c4bbbb86ba1fe8dfe90f8e3ffd5303ff67fe" gracePeriod=2 Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.176872 4720 generic.go:334] "Generic (PLEG): container finished" podID="6acda340-6400-47d1-b555-3fea5981f1ea" containerID="dd5a75c8cad5ea927b52ec124ae6c4bbbb86ba1fe8dfe90f8e3ffd5303ff67fe" exitCode=0 Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.176944 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerDied","Data":"dd5a75c8cad5ea927b52ec124ae6c4bbbb86ba1fe8dfe90f8e3ffd5303ff67fe"} Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.470765 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.608684 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities\") pod \"6acda340-6400-47d1-b555-3fea5981f1ea\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.608804 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content\") pod \"6acda340-6400-47d1-b555-3fea5981f1ea\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.608825 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8x4q\" (UniqueName: \"kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q\") pod \"6acda340-6400-47d1-b555-3fea5981f1ea\" (UID: \"6acda340-6400-47d1-b555-3fea5981f1ea\") " Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.610357 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities" (OuterVolumeSpecName: "utilities") pod "6acda340-6400-47d1-b555-3fea5981f1ea" (UID: "6acda340-6400-47d1-b555-3fea5981f1ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.615385 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q" (OuterVolumeSpecName: "kube-api-access-g8x4q") pod "6acda340-6400-47d1-b555-3fea5981f1ea" (UID: "6acda340-6400-47d1-b555-3fea5981f1ea"). InnerVolumeSpecName "kube-api-access-g8x4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.638506 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6acda340-6400-47d1-b555-3fea5981f1ea" (UID: "6acda340-6400-47d1-b555-3fea5981f1ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.710350 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.710402 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8x4q\" (UniqueName: \"kubernetes.io/projected/6acda340-6400-47d1-b555-3fea5981f1ea-kube-api-access-g8x4q\") on node \"crc\" DevicePath \"\"" Jan 22 07:54:56 crc kubenswrapper[4720]: I0122 07:54:56.710420 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6acda340-6400-47d1-b555-3fea5981f1ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.185613 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xhdh" event={"ID":"6acda340-6400-47d1-b555-3fea5981f1ea","Type":"ContainerDied","Data":"f9e752a10d7538365155dbff3d09d4ed780a1a6f7d2b2dc1987623292879dc8b"} Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.185664 4720 scope.go:117] "RemoveContainer" containerID="dd5a75c8cad5ea927b52ec124ae6c4bbbb86ba1fe8dfe90f8e3ffd5303ff67fe" Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.185782 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xhdh" Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.210465 4720 scope.go:117] "RemoveContainer" containerID="50612584bb9e5e211ff2ff051ab7a8e228a22cbfe1e9b351b2e89a2603e02fb0" Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.220537 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.227608 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xhdh"] Jan 22 07:54:57 crc kubenswrapper[4720]: I0122 07:54:57.239604 4720 scope.go:117] "RemoveContainer" containerID="8fd7371029094a0a96bdbb2170eba2334cd39334efdac30102e38a052cdc807f" Jan 22 07:54:58 crc kubenswrapper[4720]: I0122 07:54:58.221825 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" path="/var/lib/kubelet/pods/6acda340-6400-47d1-b555-3fea5981f1ea/volumes" Jan 22 07:55:02 crc kubenswrapper[4720]: I0122 07:55:02.210345 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:55:02 crc kubenswrapper[4720]: E0122 07:55:02.210967 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:55:13 crc kubenswrapper[4720]: I0122 07:55:13.210459 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:55:13 crc kubenswrapper[4720]: E0122 07:55:13.211248 4720 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:55:28 crc kubenswrapper[4720]: I0122 07:55:28.218951 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:55:28 crc kubenswrapper[4720]: E0122 07:55:28.219703 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:55:43 crc kubenswrapper[4720]: I0122 07:55:43.210747 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:55:43 crc kubenswrapper[4720]: E0122 07:55:43.211562 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:55:55 crc kubenswrapper[4720]: I0122 07:55:55.210866 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:55:55 crc kubenswrapper[4720]: E0122 07:55:55.211891 4720 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:56:07 crc kubenswrapper[4720]: I0122 07:56:07.210850 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:56:07 crc kubenswrapper[4720]: E0122 07:56:07.211730 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:56:20 crc kubenswrapper[4720]: I0122 07:56:20.211015 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:56:20 crc kubenswrapper[4720]: E0122 07:56:20.211814 4720 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bnsvd_openshift-machine-config-operator(f4b26e9d-6a95-4b1c-9750-88b6aa100c67)\"" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" Jan 22 07:56:32 crc kubenswrapper[4720]: I0122 07:56:32.210737 4720 scope.go:117] "RemoveContainer" containerID="0b44ab5d83ae4731222a091b193539bb9c97f8657185039ab14ed97bb7a11099" Jan 22 07:56:32 crc kubenswrapper[4720]: I0122 07:56:32.915733 4720 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" event={"ID":"f4b26e9d-6a95-4b1c-9750-88b6aa100c67","Type":"ContainerStarted","Data":"c30dfbb85d0e3ab4175612e89518d4d728c966882935f3486d6e925563c60703"} Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.265788 4720 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljmt5"] Jan 22 07:57:26 crc kubenswrapper[4720]: E0122 07:57:26.266561 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="extract-utilities" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.266573 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="extract-utilities" Jan 22 07:57:26 crc kubenswrapper[4720]: E0122 07:57:26.266592 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="registry-server" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.266601 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="registry-server" Jan 22 07:57:26 crc kubenswrapper[4720]: E0122 07:57:26.266614 4720 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="extract-content" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.266620 4720 state_mem.go:107] "Deleted CPUSet assignment" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="extract-content" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.266760 4720 memory_manager.go:354] "RemoveStaleState removing state" podUID="6acda340-6400-47d1-b555-3fea5981f1ea" containerName="registry-server" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.267859 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.287965 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljmt5"] Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.454730 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-catalog-content\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.454810 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgb8m\" (UniqueName: \"kubernetes.io/projected/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-kube-api-access-sgb8m\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.456247 4720 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-utilities\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.557302 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-utilities\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.557377 4720 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-catalog-content\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.557416 4720 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgb8m\" (UniqueName: \"kubernetes.io/projected/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-kube-api-access-sgb8m\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.557840 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-utilities\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.557956 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-catalog-content\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.576017 4720 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgb8m\" (UniqueName: \"kubernetes.io/projected/48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8-kube-api-access-sgb8m\") pod \"community-operators-ljmt5\" (UID: \"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8\") " pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:26 crc kubenswrapper[4720]: I0122 07:57:26.671626 4720 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:27 crc kubenswrapper[4720]: I0122 07:57:27.133641 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljmt5"] Jan 22 07:57:27 crc kubenswrapper[4720]: I0122 07:57:27.366533 4720 generic.go:334] "Generic (PLEG): container finished" podID="48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8" containerID="2fb34dac2def8105a207fcc533b19f27951a3d4894c9838f6dcfae5b268f97d1" exitCode=0 Jan 22 07:57:27 crc kubenswrapper[4720]: I0122 07:57:27.366575 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljmt5" event={"ID":"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8","Type":"ContainerDied","Data":"2fb34dac2def8105a207fcc533b19f27951a3d4894c9838f6dcfae5b268f97d1"} Jan 22 07:57:27 crc kubenswrapper[4720]: I0122 07:57:27.366602 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljmt5" event={"ID":"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8","Type":"ContainerStarted","Data":"6b31c3d868bbaef1ccf96b6a9eb336af28fd2a4e0bc90b0bff54edebab62c60c"} Jan 22 07:57:27 crc kubenswrapper[4720]: I0122 07:57:27.368070 4720 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 22 07:57:32 crc kubenswrapper[4720]: I0122 07:57:32.408863 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljmt5" event={"ID":"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8","Type":"ContainerStarted","Data":"bc8259da0c7875522d76b8d944855063cf4a5da0244182eaedb2c7703c59fc2b"} Jan 22 07:57:33 crc kubenswrapper[4720]: I0122 07:57:33.418980 4720 generic.go:334] "Generic (PLEG): container finished" podID="48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8" containerID="bc8259da0c7875522d76b8d944855063cf4a5da0244182eaedb2c7703c59fc2b" exitCode=0 Jan 22 07:57:33 crc kubenswrapper[4720]: I0122 07:57:33.420143 4720 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-ljmt5" event={"ID":"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8","Type":"ContainerDied","Data":"bc8259da0c7875522d76b8d944855063cf4a5da0244182eaedb2c7703c59fc2b"} Jan 22 07:57:34 crc kubenswrapper[4720]: I0122 07:57:34.428967 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljmt5" event={"ID":"48d3c034-1ceb-4a09-8e3f-fc8b4abb17b8","Type":"ContainerStarted","Data":"a3c12129eb6b4fa3265a4b41eb0d9263a870658580a89a791b6b1faf05f4d555"} Jan 22 07:57:36 crc kubenswrapper[4720]: I0122 07:57:36.672308 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:36 crc kubenswrapper[4720]: I0122 07:57:36.672685 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:36 crc kubenswrapper[4720]: I0122 07:57:36.737596 4720 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:36 crc kubenswrapper[4720]: I0122 07:57:36.762610 4720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljmt5" podStartSLOduration=4.294624068 podStartE2EDuration="10.762571357s" podCreationTimestamp="2026-01-22 07:57:26 +0000 UTC" firstStartedPulling="2026-01-22 07:57:27.367766848 +0000 UTC m=+4939.509673563" lastFinishedPulling="2026-01-22 07:57:33.835714147 +0000 UTC m=+4945.977620852" observedRunningTime="2026-01-22 07:57:34.449707091 +0000 UTC m=+4946.591613796" watchObservedRunningTime="2026-01-22 07:57:36.762571357 +0000 UTC m=+4948.904478062" Jan 22 07:57:46 crc kubenswrapper[4720]: I0122 07:57:46.722497 4720 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljmt5" Jan 22 07:57:49 crc kubenswrapper[4720]: I0122 
07:57:49.670228 4720 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljmt5"] Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.258878 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.259517 4720 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2gqg2" podUID="90763cf9-c272-4870-8f6d-9e3b506a712f" containerName="registry-server" containerID="cri-o://9e94f7f64ad3716dfb0de52c4c0fe4945be96884dcbafc8e978b4488e870101c" gracePeriod=2 Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.548873 4720 generic.go:334] "Generic (PLEG): container finished" podID="90763cf9-c272-4870-8f6d-9e3b506a712f" containerID="9e94f7f64ad3716dfb0de52c4c0fe4945be96884dcbafc8e978b4488e870101c" exitCode=0 Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.548971 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerDied","Data":"9e94f7f64ad3716dfb0de52c4c0fe4945be96884dcbafc8e978b4488e870101c"} Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.721260 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.769454 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities\") pod \"90763cf9-c272-4870-8f6d-9e3b506a712f\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.769820 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content\") pod \"90763cf9-c272-4870-8f6d-9e3b506a712f\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.769896 4720 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wpn4\" (UniqueName: \"kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4\") pod \"90763cf9-c272-4870-8f6d-9e3b506a712f\" (UID: \"90763cf9-c272-4870-8f6d-9e3b506a712f\") " Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.770080 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities" (OuterVolumeSpecName: "utilities") pod "90763cf9-c272-4870-8f6d-9e3b506a712f" (UID: "90763cf9-c272-4870-8f6d-9e3b506a712f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.770575 4720 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-utilities\") on node \"crc\" DevicePath \"\"" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.780139 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4" (OuterVolumeSpecName: "kube-api-access-5wpn4") pod "90763cf9-c272-4870-8f6d-9e3b506a712f" (UID: "90763cf9-c272-4870-8f6d-9e3b506a712f"). InnerVolumeSpecName "kube-api-access-5wpn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.828255 4720 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90763cf9-c272-4870-8f6d-9e3b506a712f" (UID: "90763cf9-c272-4870-8f6d-9e3b506a712f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.871586 4720 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wpn4\" (UniqueName: \"kubernetes.io/projected/90763cf9-c272-4870-8f6d-9e3b506a712f-kube-api-access-5wpn4\") on node \"crc\" DevicePath \"\"" Jan 22 07:57:50 crc kubenswrapper[4720]: I0122 07:57:50.871629 4720 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90763cf9-c272-4870-8f6d-9e3b506a712f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.557458 4720 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2gqg2" event={"ID":"90763cf9-c272-4870-8f6d-9e3b506a712f","Type":"ContainerDied","Data":"363332e23e9640fee57c53b74e58042ef7f201ba79c0109ae2b0f2dc18cdfb4e"} Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.557508 4720 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2gqg2" Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.557520 4720 scope.go:117] "RemoveContainer" containerID="9e94f7f64ad3716dfb0de52c4c0fe4945be96884dcbafc8e978b4488e870101c" Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.596576 4720 scope.go:117] "RemoveContainer" containerID="74e209790f74949de20bc519b38b1efe965e2c1ffc2a4edc8514c244108da16f" Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.608977 4720 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.613789 4720 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2gqg2"] Jan 22 07:57:51 crc kubenswrapper[4720]: I0122 07:57:51.622575 4720 scope.go:117] "RemoveContainer" containerID="bddc3808f07ea3e4fabf2daf1ac7b44c1a86b38710cd563c632beb7f9cdb7fcc" Jan 22 07:57:52 crc kubenswrapper[4720]: I0122 07:57:52.219534 4720 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90763cf9-c272-4870-8f6d-9e3b506a712f" path="/var/lib/kubelet/pods/90763cf9-c272-4870-8f6d-9e3b506a712f/volumes" Jan 22 07:58:59 crc kubenswrapper[4720]: I0122 07:58:59.780520 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:58:59 crc kubenswrapper[4720]: I0122 07:58:59.781148 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 22 07:59:29 crc kubenswrapper[4720]: 
I0122 07:59:29.779901 4720 patch_prober.go:28] interesting pod/machine-config-daemon-bnsvd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 22 07:59:29 crc kubenswrapper[4720]: I0122 07:59:29.780444 4720 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bnsvd" podUID="f4b26e9d-6a95-4b1c-9750-88b6aa100c67" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"